00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 111
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3289
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.022 The recommended git tool is: git
00:00:00.022 using credential 00000000-0000-0000-0000-000000000002
00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.040 Fetching changes from the remote Git repository
00:00:00.041 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.068 Using shallow fetch with depth 1
00:00:00.068 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.068 > git --version # timeout=10
00:00:00.096 > git --version # 'git version 2.39.2'
00:00:00.096 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.140 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.140 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.236 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.251 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.266 Checking out Revision 8d05d9b748dd18cae96eb3802f97dd56ef08e163 (FETCH_HEAD)
00:00:02.266 > git config core.sparsecheckout # timeout=10
00:00:02.279 > git read-tree -mu HEAD # timeout=10
00:00:02.297 > git checkout -f 8d05d9b748dd18cae96eb3802f97dd56ef08e163 # timeout=5
00:00:02.316 Commit message: "jjb/jobs: reduce repetitive accel tests execution"
00:00:02.316 > git rev-list --no-walk 8d05d9b748dd18cae96eb3802f97dd56ef08e163 # timeout=10
00:00:02.468 [Pipeline] Start of Pipeline
00:00:02.486 [Pipeline] library
00:00:02.489 Loading library shm_lib@master
00:00:02.489 Library shm_lib@master is cached. Copying from home.
00:00:02.513 [Pipeline] node
00:00:02.529 Running on WFP29 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:02.531 [Pipeline] {
00:00:02.545 [Pipeline] catchError
00:00:02.547 [Pipeline] {
00:00:02.561 [Pipeline] wrap
00:00:02.571 [Pipeline] {
00:00:02.577 [Pipeline] stage
00:00:02.578 [Pipeline] { (Prologue)
00:00:02.811 [Pipeline] sh
00:00:03.088 + logger -p user.info -t JENKINS-CI
00:00:03.100 [Pipeline] echo
00:00:03.101 Node: WFP29
00:00:03.107 [Pipeline] sh
00:00:03.399 [Pipeline] setCustomBuildProperty
00:00:03.411 [Pipeline] echo
00:00:03.413 Cleanup processes
00:00:03.417 [Pipeline] sh
00:00:03.693 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.693 3296541 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.707 [Pipeline] sh
00:00:03.981 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.981 ++ grep -v 'sudo pgrep'
00:00:03.981 ++ awk '{print $1}'
00:00:03.981 + sudo kill -9
00:00:03.981 + true
00:00:03.994 [Pipeline] cleanWs
00:00:04.005 [WS-CLEANUP] Deleting project workspace...
00:00:04.005 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.012 [WS-CLEANUP] done
00:00:04.018 [Pipeline] setCustomBuildProperty
00:00:04.032 [Pipeline] sh
00:00:04.310 + sudo git config --global --replace-all safe.directory '*'
00:00:04.381 [Pipeline] httpRequest
00:00:04.400 [Pipeline] echo
00:00:04.401 Sorcerer 10.211.164.101 is alive
00:00:04.409 [Pipeline] httpRequest
00:00:04.413 HttpMethod: GET
00:00:04.413 URL: http://10.211.164.101/packages/jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz
00:00:04.414 Sending request to url: http://10.211.164.101/packages/jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz
00:00:04.415 Response Code: HTTP/1.1 200 OK
00:00:04.416 Success: Status code 200 is in the accepted range: 200,404
00:00:04.417 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz
00:00:04.561 [Pipeline] sh
00:00:04.839 + tar --no-same-owner -xf jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz
00:00:04.854 [Pipeline] httpRequest
00:00:04.871 [Pipeline] echo
00:00:04.873 Sorcerer 10.211.164.101 is alive
00:00:04.881 [Pipeline] httpRequest
00:00:04.885 HttpMethod: GET
00:00:04.886 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:04.886 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:04.887 Response Code: HTTP/1.1 200 OK
00:00:04.887 Success: Status code 200 is in the accepted range: 200,404
00:00:04.888 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:14.693 [Pipeline] sh
00:00:14.978 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:17.529 [Pipeline] sh
00:00:17.816 + git -C spdk log --oneline -n5
00:00:17.816 241d0f3c9 test: fix dpdk builds on ubuntu24
00:00:17.816 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs
00:00:17.816 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:00:17.816 330a4f94d nvme: check pthread_mutex_destroy() return value
00:00:17.816 7b72c3ced nvme: add nvme_ctrlr_lock
00:00:17.836 [Pipeline] withCredentials
00:00:17.848 > git --version # timeout=10
00:00:17.861 > git --version # 'git version 2.39.2'
00:00:17.880 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:17.883 [Pipeline] {
00:00:17.893 [Pipeline] retry
00:00:17.896 [Pipeline] {
00:00:17.916 [Pipeline] sh
00:00:18.200 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:19.178 [Pipeline] }
00:00:19.202 [Pipeline] // retry
00:00:19.208 [Pipeline] }
00:00:19.226 [Pipeline] // withCredentials
00:00:19.236 [Pipeline] httpRequest
00:00:19.258 [Pipeline] echo
00:00:19.260 Sorcerer 10.211.164.101 is alive
00:00:19.269 [Pipeline] httpRequest
00:00:19.274 HttpMethod: GET
00:00:19.274 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:19.275 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:19.277 Response Code: HTTP/1.1 200 OK
00:00:19.278 Success: Status code 200 is in the accepted range: 200,404
00:00:19.279 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:25.768 [Pipeline] sh
00:00:26.053 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:27.442 [Pipeline] sh
00:00:27.725 + git -C dpdk log --oneline -n5
00:00:27.725 caf0f5d395 version: 22.11.4
00:00:27.725 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:27.725 dc9c799c7d vhost: fix missing spinlock unlock
00:00:27.725 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:27.725 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:27.738 [Pipeline] }
00:00:27.760 [Pipeline] // stage
00:00:27.771 [Pipeline] stage
00:00:27.773 [Pipeline] { (Prepare)
00:00:27.795 [Pipeline] writeFile
00:00:27.814 [Pipeline] sh
00:00:28.096 + logger -p user.info -t JENKINS-CI
00:00:28.110 [Pipeline] sh
00:00:28.393 + logger -p user.info -t JENKINS-CI
00:00:28.407 [Pipeline] sh
00:00:28.691 + cat autorun-spdk.conf
00:00:28.691 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.691 SPDK_RUN_UBSAN=1
00:00:28.691 SPDK_TEST_FUZZER=1
00:00:28.691 SPDK_TEST_FUZZER_SHORT=1
00:00:28.691 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:28.691 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:28.698 RUN_NIGHTLY=1
00:00:28.703 [Pipeline] readFile
00:00:28.730 [Pipeline] withEnv
00:00:28.733 [Pipeline] {
00:00:28.746 [Pipeline] sh
00:00:29.024 + set -ex
00:00:29.024 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:29.024 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:29.024 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.024 ++ SPDK_RUN_UBSAN=1
00:00:29.024 ++ SPDK_TEST_FUZZER=1
00:00:29.024 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:29.024 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:29.024 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:29.024 ++ RUN_NIGHTLY=1
00:00:29.024 + case $SPDK_TEST_NVMF_NICS in
00:00:29.024 + DRIVERS=
00:00:29.024 + [[ -n '' ]]
00:00:29.024 + exit 0
00:00:29.033 [Pipeline] }
00:00:29.050 [Pipeline] // withEnv
00:00:29.053 [Pipeline] }
00:00:29.064 [Pipeline] // stage
00:00:29.073 [Pipeline] catchError
00:00:29.074 [Pipeline] {
00:00:29.083 [Pipeline] timeout
00:00:29.084 Timeout set to expire in 30 min
00:00:29.085 [Pipeline] {
00:00:29.103 [Pipeline] stage
00:00:29.105 [Pipeline] { (Tests)
00:00:29.120 [Pipeline] sh
00:00:29.404 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.404 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.404 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.404 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:29.404 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:29.404 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.404 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:29.404 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.404 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.404 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.404 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:29.404 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.404 + source /etc/os-release
00:00:29.404 ++ NAME='Fedora Linux'
00:00:29.404 ++ VERSION='38 (Cloud Edition)'
00:00:29.404 ++ ID=fedora
00:00:29.404 ++ VERSION_ID=38
00:00:29.404 ++ VERSION_CODENAME=
00:00:29.404 ++ PLATFORM_ID=platform:f38
00:00:29.404 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:29.404 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:29.404 ++ LOGO=fedora-logo-icon
00:00:29.404 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:29.404 ++ HOME_URL=https://fedoraproject.org/
00:00:29.404 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:29.404 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:29.404 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:29.404 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:29.404 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:29.404 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:29.404 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:29.404 ++ SUPPORT_END=2024-05-14
00:00:29.404 ++ VARIANT='Cloud Edition'
00:00:29.404 ++ VARIANT_ID=cloud
00:00:29.404 + uname -a
00:00:29.404 Linux spdk-wfp-29 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:29.404 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:32.693 Hugepages
00:00:32.693 node     hugesize     free /  total
00:00:32.693 node0   1048576kB        0 /      0
00:00:32.693 node0      2048kB        0 /      0
00:00:32.693 node1   1048576kB        0 /      0
00:00:32.693 node1      2048kB        0 /      0
00:00:32.693
00:00:32.693 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:00:32.693 I/OAT    0000:00:04.0    8086   2021   0       ioatdma          -          -
00:00:32.693 I/OAT    0000:00:04.1    8086   2021   0       ioatdma          -          -
00:00:32.693 I/OAT    0000:00:04.2    8086   2021   0       ioatdma          -          -
00:00:32.693 I/OAT    0000:00:04.3    8086   2021   0       ioatdma          -          -
00:00:32.693 I/OAT    0000:00:04.4    8086   2021   0       ioatdma          -          -
00:00:32.693 I/OAT    0000:00:04.5    8086   2021   0       ioatdma          -          -
00:00:32.693 I/OAT    0000:00:04.6    8086   2021   0       ioatdma          -          -
00:00:32.693 I/OAT    0000:00:04.7    8086   2021   0       ioatdma          -          -
00:00:32.693 NVMe     0000:5e:00.0    144d   a80a   0       nvme             nvme0      nvme0n1
00:00:32.693 I/OAT    0000:80:04.0    8086   2021   1       ioatdma          -          -
00:00:32.693 I/OAT    0000:80:04.1    8086   2021   1       ioatdma          -          -
00:00:32.693 I/OAT    0000:80:04.2    8086   2021   1       ioatdma          -          -
00:00:32.693 I/OAT    0000:80:04.3    8086   2021   1       ioatdma          -          -
00:00:32.693 I/OAT    0000:80:04.4    8086   2021   1       ioatdma          -          -
00:00:32.693 I/OAT    0000:80:04.5    8086   2021   1       ioatdma          -          -
00:00:32.693 I/OAT    0000:80:04.6    8086   2021   1       ioatdma          -          -
00:00:32.693 I/OAT    0000:80:04.7    8086   2021   1       ioatdma          -          -
00:00:32.693 NVMe     0000:af:00.0    8086   2701   1       nvme             nvme1      nvme1n1
00:00:32.693 NVMe     0000:b0:00.0    8086   2701   1       nvme             nvme2      nvme2n1
00:00:32.693 + rm -f /tmp/spdk-ld-path
00:00:32.693 + source autorun-spdk.conf
00:00:32.693 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.693 ++ SPDK_RUN_UBSAN=1
00:00:32.693 ++ SPDK_TEST_FUZZER=1
00:00:32.693 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:32.693 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:32.693 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:32.693 ++ RUN_NIGHTLY=1
00:00:32.693 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:32.693 + [[ -n '' ]]
00:00:32.693 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:32.693 + for M in /var/spdk/build-*-manifest.txt
00:00:32.693 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:32.693 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.693 + for M in /var/spdk/build-*-manifest.txt
00:00:32.693 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:32.693 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.693 ++ uname
00:00:32.693 + [[ Linux == \L\i\n\u\x ]]
00:00:32.693 + sudo dmesg -T
00:00:32.693 + sudo dmesg --clear
00:00:32.693 + dmesg_pid=3298063
00:00:32.693 + [[ Fedora Linux == FreeBSD ]]
00:00:32.693 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.693 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.693 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:32.693 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:32.693 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:32.693 + [[ -x /usr/src/fio-static/fio ]]
00:00:32.693 + sudo dmesg -Tw
00:00:32.693 + export FIO_BIN=/usr/src/fio-static/fio
00:00:32.693 + FIO_BIN=/usr/src/fio-static/fio
00:00:32.693 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:32.693 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:32.693 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:32.693 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.693 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.693 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:32.693 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.693 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.693 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:32.694 Test configuration:
00:00:32.694 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.694 SPDK_RUN_UBSAN=1
00:00:32.694 SPDK_TEST_FUZZER=1
00:00:32.694 SPDK_TEST_FUZZER_SHORT=1
00:00:32.694 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:32.694 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:32.694 RUN_NIGHTLY=1
00:00:32.694 10:21:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:00:32.694 10:21:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:32.694 10:21:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:32.694 10:21:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:32.694 10:21:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.694 10:21:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.694 10:21:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.694 10:21:21 -- paths/export.sh@5 -- $ export PATH
00:00:32.694 10:21:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.694 10:21:21 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:00:32.694 10:21:21 -- common/autobuild_common.sh@440 -- $ date +%s
00:00:32.694 10:21:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721722881.XXXXXX
00:00:32.694 10:21:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721722881.wDknhV
00:00:32.694 10:21:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:00:32.694 10:21:21 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:00:32.694 10:21:21 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:32.694 10:21:21 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk'
00:00:32.694 10:21:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:32.694 10:21:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:32.694 10:21:21 -- common/autobuild_common.sh@456 -- $ get_config_params
00:00:32.694 10:21:21 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:00:32.694 10:21:21 -- common/autotest_common.sh@10 -- $ set +x
00:00:32.694 10:21:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user'
00:00:32.694 10:21:21 -- common/autobuild_common.sh@458 -- $ start_monitor_resources
00:00:32.694 10:21:21 -- pm/common@17 -- $ local monitor
00:00:32.694 10:21:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.694 10:21:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.694 10:21:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.694 10:21:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.694 10:21:21 -- pm/common@21 -- $ date +%s
00:00:32.694 10:21:21 -- pm/common@25 -- $ sleep 1
00:00:32.694 10:21:21 -- pm/common@21 -- $ date +%s
00:00:32.694 10:21:21 -- pm/common@21 -- $ date +%s
00:00:32.694 10:21:21 -- pm/common@21 -- $ date +%s
00:00:32.694 10:21:21 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722881
00:00:32.694 10:21:21 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722881
00:00:32.694 10:21:21 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722881
00:00:32.694 10:21:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722881
00:00:32.694 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722881_collect-vmstat.pm.log
00:00:32.694 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722881_collect-cpu-load.pm.log
00:00:32.694 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722881_collect-cpu-temp.pm.log
00:00:32.694 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722881_collect-bmc-pm.bmc.pm.log
00:00:33.630 10:21:22 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT
00:00:33.630 10:21:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:33.630 10:21:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:33.630 10:21:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:33.630 10:21:22 -- spdk/autobuild.sh@16 -- $ date -u
00:00:33.630 Tue Jul 23 08:21:22 AM UTC 2024
00:00:33.630 10:21:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:33.630 v24.05-15-g241d0f3c9
00:00:33.631 10:21:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:33.631 10:21:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:33.631 10:21:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:33.631 10:21:22 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:00:33.631 10:21:22 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:00:33.631 10:21:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.631 ************************************
00:00:33.631 START TEST ubsan
00:00:33.631 ************************************
00:00:33.631 10:21:22 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:00:33.631 using ubsan
00:00:33.631
00:00:33.631 real	0m0.000s
00:00:33.631 user	0m0.000s
00:00:33.631 sys	0m0.000s
00:00:33.631 10:21:22 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:00:33.631 10:21:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:33.631 ************************************
00:00:33.631 END TEST ubsan
00:00:33.631 ************************************
00:00:33.890 10:21:22 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:00:33.890 10:21:22 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:00:33.890 10:21:22 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
00:00:33.890 10:21:22 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
00:00:33.890 10:21:22 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:00:33.890 10:21:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.890 ************************************
00:00:33.890 START TEST build_native_dpdk
00:00:33.890 ************************************
00:00:33.890 10:21:22 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk
00:00:33.890 10:21:22 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk log --oneline -n 5
00:00:33.891 caf0f5d395 version: 22.11.4
00:00:33.891 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:33.891 dc9c799c7d vhost: fix missing spinlock unlock
00:00:33.891 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:33.891 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:00:33.891 patching file config/rte_config.h
00:00:33.891 Hunk #1 succeeded at 60 (offset 1 line).
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:00:33.891 10:21:22 build_native_dpdk -- scripts/common.sh@365 -- $ return 0
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:00:33.891 patching file lib/pcapng/rte_pcapng.c
00:00:33.891 Hunk #1 succeeded at 110 (offset -18 lines).
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:00:33.891 10:21:22 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:39.168 The Meson build system
00:00:39.168 Version: 1.3.1
00:00:39.168 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk
00:00:39.168 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp
00:00:39.168 Build type: native build
00:00:39.168 Program cat found: YES (/usr/bin/cat)
00:00:39.168 Project name: DPDK
00:00:39.168 Project version: 22.11.4
00:00:39.168 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:39.168 C linker for the host machine: gcc ld.bfd 2.39-16
00:00:39.168 Host machine cpu family: x86_64
00:00:39.168 Host machine cpu: x86_64
00:00:39.168 Message: ## Building in Developer Mode ##
00:00:39.168 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:39.168 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:00:39.168 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:00:39.168 Program objdump found: YES (/usr/bin/objdump)
00:00:39.168 Program python3 found: YES (/usr/bin/python3)
00:00:39.168 Program cat found: YES (/usr/bin/cat)
00:00:39.168 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:00:39.168 Checking for size of "void *" : 8
00:00:39.168 Checking for size of "void *" : 8 (cached)
00:00:39.168 Library m found: YES
00:00:39.168 Library numa found: YES
00:00:39.168 Has header "numaif.h" : YES
00:00:39.168 Library fdt found: NO
00:00:39.168 Library execinfo found: NO
00:00:39.168 Has header "execinfo.h" : YES
00:00:39.168 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:39.169 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:39.169 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:39.169 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:39.169 Run-time dependency openssl found: YES 3.0.9
00:00:39.169 Run-time dependency libpcap found: YES 1.10.4
00:00:39.169 Has header "pcap.h" with dependency libpcap: YES
00:00:39.169 Compiler for C supports arguments -Wcast-qual: YES
00:00:39.169 Compiler for C supports arguments -Wdeprecated: YES
00:00:39.169 Compiler for C supports arguments -Wformat: YES
00:00:39.169 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:39.169 Compiler for C supports arguments -Wformat-security: NO
00:00:39.169 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:39.169 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:39.169 Compiler for C supports arguments -Wnested-externs: YES
00:00:39.169 Compiler for C supports arguments -Wold-style-definition: YES
00:00:39.169 Compiler for C supports arguments -Wpointer-arith: YES
00:00:39.169 Compiler for C supports arguments -Wsign-compare: YES
00:00:39.169 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:39.169 Compiler for C supports arguments -Wundef: YES
00:00:39.169 Compiler for C supports arguments -Wwrite-strings: YES
00:00:39.169 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:39.169 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:39.169 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:39.169 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:39.169 Compiler for C supports arguments -mavx512f: YES
00:00:39.169 Checking if "AVX512 checking" compiles: YES
00:00:39.169 Fetching value of define "__SSE4_2__" : 1
00:00:39.169 Fetching value of define "__AES__" : 1
00:00:39.169 Fetching value of define "__AVX__" : 1
00:00:39.169 Fetching value of define "__AVX2__" : 1
00:00:39.169 Fetching value of define "__AVX512BW__" : 1
00:00:39.169 Fetching value of define "__AVX512CD__" : 1
00:00:39.169 Fetching value of define "__AVX512DQ__" : 1
00:00:39.169 Fetching value of define "__AVX512F__" : 1
00:00:39.169 Fetching value of define "__AVX512VL__" : 1
00:00:39.169 Fetching value of define "__PCLMUL__" : 1
00:00:39.169 Fetching value of define "__RDRND__" : 1
00:00:39.169 Fetching value of define "__RDSEED__" : 1
00:00:39.169 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:39.169 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:39.169 Message: lib/kvargs: Defining dependency "kvargs"
00:00:39.169 Message: lib/telemetry: Defining dependency "telemetry"
00:00:39.169 Checking for function "getentropy" : YES
00:00:39.169 Message: lib/eal: Defining dependency "eal"
00:00:39.169 Message: lib/ring: Defining dependency "ring"
00:00:39.169 Message: lib/rcu: Defining dependency "rcu"
00:00:39.169 Message: lib/mempool: Defining dependency "mempool"
00:00:39.169 Message: lib/mbuf: Defining dependency "mbuf"
00:00:39.169 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512VL__" : 1 (cached)
00:00:39.169 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:00:39.169 Compiler for C supports arguments -mpclmul: YES
00:00:39.169 Compiler for C supports arguments -maes: YES
00:00:39.169 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:39.169 Compiler for C supports arguments -mavx512bw: YES
00:00:39.169 Compiler for C supports arguments -mavx512dq: YES
00:00:39.169 Compiler for C supports arguments -mavx512vl: YES
00:00:39.169 Compiler for C supports arguments -mvpclmulqdq: YES
00:00:39.169 Compiler for C supports arguments -mavx2: YES
00:00:39.169 Compiler for C supports arguments -mavx: YES
00:00:39.169 Message: lib/net: Defining dependency "net"
00:00:39.169 Message: lib/meter: Defining dependency "meter"
00:00:39.169 Message: lib/ethdev: Defining dependency "ethdev"
00:00:39.169 Message: lib/pci: Defining dependency "pci"
00:00:39.169 Message: lib/cmdline: Defining dependency "cmdline"
00:00:39.169 Message: lib/metrics: Defining dependency "metrics"
00:00:39.169 Message: lib/hash: Defining dependency "hash"
00:00:39.169 Message: lib/timer: Defining dependency "timer"
00:00:39.169 Fetching value of define "__AVX2__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512VL__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512CD__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:39.169 Message: lib/acl: Defining dependency "acl"
00:00:39.169 Message: lib/bbdev: Defining dependency "bbdev"
00:00:39.169 Message: lib/bitratestats: Defining dependency "bitratestats"
00:00:39.169 Run-time dependency libelf found: YES 0.190
00:00:39.169 Message: lib/bpf: Defining dependency "bpf"
00:00:39.169 Message: lib/cfgfile: Defining dependency "cfgfile"
00:00:39.169 Message: lib/compressdev: Defining dependency "compressdev"
00:00:39.169 Message: lib/cryptodev: Defining dependency "cryptodev"
00:00:39.169 Message: lib/distributor: Defining dependency "distributor"
00:00:39.169 Message: lib/efd: Defining dependency "efd"
00:00:39.169 Message: lib/eventdev: Defining dependency "eventdev"
00:00:39.169 Message: lib/gpudev: Defining dependency "gpudev"
00:00:39.169 Message: lib/gro: Defining dependency "gro"
00:00:39.169 Message: lib/gso: Defining dependency "gso"
00:00:39.169 Message: lib/ip_frag: Defining dependency "ip_frag"
00:00:39.169 Message: lib/jobstats: Defining dependency "jobstats"
00:00:39.169 Message: lib/latencystats: Defining dependency "latencystats"
00:00:39.169 Message: lib/lpm: Defining dependency "lpm"
00:00:39.169 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512IFMA__" : (undefined)
00:00:39.169 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:00:39.169 Message: lib/member: Defining dependency "member"
00:00:39.169 Message: lib/pcapng: Defining dependency "pcapng"
00:00:39.169 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:39.169 Message: lib/power: Defining dependency "power"
00:00:39.169 Message: lib/rawdev: Defining dependency "rawdev"
00:00:39.169 Message: lib/regexdev: Defining dependency "regexdev"
00:00:39.169 Message: lib/dmadev: Defining dependency "dmadev"
00:00:39.169 Message: lib/rib: Defining dependency "rib"
00:00:39.169 Message: lib/reorder: Defining dependency "reorder"
00:00:39.169 Message: lib/sched: Defining dependency "sched"
00:00:39.169 Message: lib/security: Defining dependency "security"
00:00:39.169 Message: lib/stack: Defining dependency "stack"
00:00:39.169 Has header "linux/userfaultfd.h" : YES
00:00:39.169 Message: lib/vhost: Defining dependency "vhost"
00:00:39.169 Message: lib/ipsec: Defining dependency "ipsec"
00:00:39.169 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:00:39.169 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:39.169 Message: lib/fib: Defining dependency "fib"
00:00:39.169 Message: lib/port: Defining dependency "port"
00:00:39.169 Message: lib/pdump: Defining dependency "pdump"
00:00:39.169 Message: lib/table: Defining dependency "table"
00:00:39.169 Message: lib/pipeline: Defining dependency "pipeline"
00:00:39.169 Message: lib/graph: Defining dependency "graph"
00:00:39.169 Message: lib/node: Defining dependency "node"
00:00:39.169 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:39.169 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:39.169 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:39.169 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:39.169 Compiler for C supports arguments -Wno-sign-compare: YES
00:00:39.169 Compiler for C supports arguments -Wno-unused-value: YES
00:00:39.169 Compiler for C supports arguments -Wno-format: YES
00:00:39.169 Compiler for C supports arguments -Wno-format-security: YES
00:00:39.169 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:00:39.430 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:00:39.430 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:00:39.430 Compiler for C supports arguments -Wno-unused-parameter: YES
00:00:39.430 Fetching value of define "__AVX2__" : 1 (cached)
00:00:39.430 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:39.430 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:39.430 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:39.430 Compiler for C supports arguments -mavx512bw: YES (cached)
00:00:39.430 Compiler for C supports arguments -march=skylake-avx512: YES
00:00:39.430 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:00:39.430 Program doxygen found: YES (/usr/bin/doxygen)
00:00:39.430 Configuring doxy-api.conf using configuration
00:00:39.430 Program sphinx-build found: NO
00:00:39.430 Configuring rte_build_config.h using configuration
00:00:39.430 Message:
00:00:39.430 =================
00:00:39.430 Applications Enabled
00:00:39.430 =================
00:00:39.430
00:00:39.430 apps:
00:00:39.430 	dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:00:39.430 	test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:00:39.430 	test-security-perf,
00:00:39.430
00:00:39.430 Message:
00:00:39.430 =================
00:00:39.430 Libraries Enabled
00:00:39.430 =================
00:00:39.430
00:00:39.430 libs:
00:00:39.430 	kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:00:39.430 	meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:00:39.430 	bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:00:39.430 	eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:00:39.430 	member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:00:39.430 	sched, security, stack, vhost, ipsec, fib, port, pdump,
00:00:39.430 	table, pipeline, graph, node,
00:00:39.430
00:00:39.430 Message:
00:00:39.430 ===============
00:00:39.430 Drivers Enabled
00:00:39.430 ===============
00:00:39.430
00:00:39.430 common:
00:00:39.430
00:00:39.430 bus:
00:00:39.430 	pci, vdev,
00:00:39.430 mempool:
00:00:39.430 	ring,
00:00:39.430 dma:
00:00:39.430
00:00:39.430 net:
00:00:39.430 	i40e,
00:00:39.430 raw:
00:00:39.430
00:00:39.430 crypto:
00:00:39.430
00:00:39.430 compress:
00:00:39.430
00:00:39.430 regex:
00:00:39.430
00:00:39.430 vdpa:
00:00:39.430
00:00:39.430 event:
00:00:39.430
00:00:39.430 baseband:
00:00:39.430
00:00:39.430 gpu:
00:00:39.430
00:00:39.430
00:00:39.430 Message:
00:00:39.430 =================
00:00:39.430 Content Skipped
00:00:39.430 =================
00:00:39.430
00:00:39.430 apps:
00:00:39.430
00:00:39.430 libs:
00:00:39.430 	kni: explicitly disabled via build config (deprecated lib)
00:00:39.430 	flow_classify: explicitly disabled via build config (deprecated lib)
00:00:39.430
00:00:39.430 drivers:
00:00:39.430 	common/cpt: not in enabled drivers build config
00:00:39.430 	common/dpaax: not in enabled drivers build config
00:00:39.430 	common/iavf: not in enabled drivers build config
00:00:39.430 	common/idpf: not in enabled drivers build config
00:00:39.430 	common/mvep: not in enabled drivers build config
00:00:39.430 	common/octeontx: not in enabled drivers build config
00:00:39.430 	bus/auxiliary: not in enabled drivers build config
00:00:39.430 	bus/dpaa: not in enabled drivers build config
00:00:39.430 	bus/fslmc: not in enabled drivers build config
00:00:39.430 	bus/ifpga: not in enabled drivers build config
00:00:39.430 	bus/vmbus: not in enabled drivers build config
00:00:39.430 	common/cnxk: not in enabled drivers build config
00:00:39.430 	common/mlx5: not in enabled drivers build config
00:00:39.430 	common/qat: not in enabled drivers build config
00:00:39.430 	common/sfc_efx: not in enabled drivers build config
00:00:39.430 	mempool/bucket: not in enabled drivers build config
00:00:39.430 	mempool/cnxk: not in enabled drivers build config
00:00:39.430 	mempool/dpaa: not in enabled drivers build config
00:00:39.430 	mempool/dpaa2: not in enabled drivers build config
00:00:39.430 	mempool/octeontx: not in enabled drivers build config
00:00:39.430 	mempool/stack: not in enabled drivers build config
00:00:39.430 	dma/cnxk: not in enabled drivers build config
00:00:39.430 	dma/dpaa: not in enabled drivers build config
00:00:39.430 	dma/dpaa2: not in enabled drivers build config
00:00:39.430 	dma/hisilicon: not in enabled drivers build config
00:00:39.430 	dma/idxd: not in enabled drivers build config
00:00:39.430 	dma/ioat: not in enabled drivers build config
00:00:39.430 	dma/skeleton: not in enabled drivers build config
00:00:39.430 	net/af_packet: not in enabled drivers build config
00:00:39.430 	net/af_xdp: not in enabled drivers build config
00:00:39.430 	net/ark: not in enabled drivers build config
00:00:39.430 	net/atlantic: not in enabled drivers build config
00:00:39.430 	net/avp: not in enabled drivers build config
00:00:39.430 	net/axgbe: not in enabled drivers build config
00:00:39.430 	net/bnx2x: not in enabled drivers build config
00:00:39.430 	net/bnxt: not in enabled drivers build config
00:00:39.430 	net/bonding: not in enabled drivers build config
00:00:39.430 	net/cnxk: not in enabled drivers build config
00:00:39.431 	net/cxgbe: not in enabled drivers build config
00:00:39.431 	net/dpaa: not in enabled drivers build config
00:00:39.431 	net/dpaa2: not in enabled drivers build config
00:00:39.431 	net/e1000: not in enabled drivers build config
00:00:39.431 	net/ena: not in enabled drivers build config
00:00:39.431 	net/enetc: not in enabled drivers build config
00:00:39.431 	net/enetfec: not in enabled drivers build config
00:00:39.431 	net/enic: not in enabled drivers build config
00:00:39.431 	net/failsafe: not in enabled drivers build config
00:00:39.431 	net/fm10k: not in enabled drivers build config
00:00:39.431 	net/gve: not in enabled drivers build config
00:00:39.431 	net/hinic: not in enabled drivers build config
00:00:39.431 	net/hns3: not in enabled drivers build config
00:00:39.431 	net/iavf: not in enabled drivers build config
00:00:39.431 	net/ice: not in enabled drivers build config
00:00:39.431 	net/idpf: not in enabled drivers build config
00:00:39.431 	net/igc: not in enabled drivers build config
00:00:39.431 	net/ionic: not in enabled drivers build config
00:00:39.431 	net/ipn3ke: not in enabled drivers build config
00:00:39.431 	net/ixgbe: not in enabled drivers build config
00:00:39.431 	net/kni: not in enabled drivers build config
00:00:39.431 	net/liquidio: not in enabled drivers build config
00:00:39.431 	net/mana: not in enabled drivers build config
00:00:39.431 	net/memif: not in enabled drivers build config
00:00:39.431 	net/mlx4: not in enabled drivers build config
00:00:39.431 	net/mlx5: not in enabled drivers build config
00:00:39.431 	net/mvneta: not in enabled drivers build config
00:00:39.431 	net/mvpp2: not in enabled drivers build config
00:00:39.431 	net/netvsc: not in enabled drivers build config
00:00:39.431 	net/nfb: not in enabled drivers build config
00:00:39.431 	net/nfp: not in enabled drivers build config
00:00:39.431 	net/ngbe: not in enabled drivers build config
00:00:39.431 	net/null: not in enabled drivers build config
00:00:39.431 	net/octeontx: not in enabled drivers build config
00:00:39.431 	net/octeon_ep: not in enabled drivers build config
00:00:39.431 	net/pcap: not in enabled drivers build config
00:00:39.431 	net/pfe: not in enabled drivers build config
00:00:39.431 	net/qede: not in enabled drivers build config
00:00:39.431 	net/ring: not in enabled drivers build config
00:00:39.431 	net/sfc: not in enabled drivers build config
00:00:39.431 	net/softnic: not in enabled drivers build config
00:00:39.431 	net/tap: not in enabled drivers build config
00:00:39.431 	net/thunderx: not in enabled drivers build config
00:00:39.431 	net/txgbe: not in enabled drivers build config
00:00:39.431 	net/vdev_netvsc: not in enabled drivers build config
00:00:39.431 	net/vhost: not in enabled drivers build config
00:00:39.431 	net/virtio: not in enabled drivers build config
00:00:39.431 	net/vmxnet3: not in enabled drivers build config
00:00:39.431 	raw/cnxk_bphy: not in enabled drivers build config
00:00:39.431 	raw/cnxk_gpio: not in enabled drivers build config
00:00:39.431 	raw/dpaa2_cmdif: not in enabled drivers build config
00:00:39.431 	raw/ifpga: not in enabled drivers build config
00:00:39.431 	raw/ntb: not in enabled drivers build config
00:00:39.431 	raw/skeleton: not in enabled drivers build config
00:00:39.431 	crypto/armv8: not in enabled drivers build config
00:00:39.431 	crypto/bcmfs: not in enabled drivers build config
00:00:39.431 	crypto/caam_jr: not in enabled drivers build config
00:00:39.431 	crypto/ccp: not in enabled drivers build config
00:00:39.431 	crypto/cnxk: not in enabled drivers build config
00:00:39.431 	crypto/dpaa_sec: not in enabled drivers build config
00:00:39.431 	crypto/dpaa2_sec: not in enabled drivers build config
00:00:39.431 	crypto/ipsec_mb: not in enabled drivers build config
00:00:39.431 	crypto/mlx5: not in enabled drivers build config
00:00:39.431 	crypto/mvsam: not in enabled drivers build config
00:00:39.431 	crypto/nitrox: not in enabled drivers build config
00:00:39.431 	crypto/null: not in enabled drivers build config
00:00:39.431 	crypto/octeontx: not in enabled drivers build config
00:00:39.431 	crypto/openssl: not in enabled drivers build config
00:00:39.431 	crypto/scheduler: not in enabled drivers build config
00:00:39.431 	crypto/uadk: not in enabled drivers build config
00:00:39.431 	crypto/virtio: not in enabled drivers build config
00:00:39.431 	compress/isal: not in enabled drivers build config
00:00:39.431 	compress/mlx5: not in enabled drivers build config
00:00:39.431 	compress/octeontx: not in enabled drivers build config
00:00:39.431 	compress/zlib: not in enabled drivers build config
00:00:39.431 	regex/mlx5: not in enabled drivers build config
00:00:39.431 	regex/cn9k: not in enabled drivers build config
00:00:39.431 	vdpa/ifc: not in enabled drivers build config
00:00:39.431 	vdpa/mlx5: not in enabled drivers build config
00:00:39.431 	vdpa/sfc: not in enabled drivers build config
00:00:39.431 	event/cnxk: not in enabled drivers build config
00:00:39.431 	event/dlb2: not in enabled drivers build config
00:00:39.431 	event/dpaa: not in enabled drivers build config
00:00:39.431 	event/dpaa2: not in enabled drivers build config
00:00:39.431 	event/dsw: not in enabled drivers build config
00:00:39.431 	event/opdl: not in enabled drivers build config
00:00:39.431 	event/skeleton: not in enabled drivers build config
00:00:39.431 	event/sw: not in enabled drivers build config
00:00:39.431 	event/octeontx: not in enabled drivers build config
00:00:39.431 	baseband/acc: not in enabled drivers build config
00:00:39.431 	baseband/fpga_5gnr_fec: not in enabled drivers build config
00:00:39.431 	baseband/fpga_lte_fec: not in enabled drivers build config
00:00:39.431 	baseband/la12xx: not in enabled drivers build config
00:00:39.431 	baseband/null: not in enabled drivers build config
00:00:39.431 	baseband/turbo_sw: not in enabled drivers build config
00:00:39.431 	gpu/cuda: not in enabled drivers build config
00:00:39.431
00:00:39.431
00:00:39.431 Build targets in project: 311
00:00:39.431
00:00:39.431 DPDK 22.11.4
00:00:39.431
00:00:39.431 User defined options
00:00:39.431 	libdir : lib
00:00:39.431 	prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:00:39.431 	c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:00:39.431 	c_link_args :
00:00:39.431 	enable_docs : false
00:00:39.431 	enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:39.431 	enable_kmods : false
00:00:39.431 	machine : native
00:00:39.431 	tests : false
00:00:39.431
00:00:39.431 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:39.431
00:00:39.431 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:00:39.431 10:21:27 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j72 00:00:39.431 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:00:39.694 [1/740] Generating lib/rte_kvargs_def with a custom command 00:00:39.694 [2/740] Generating lib/rte_telemetry_mingw with a custom command 00:00:39.694 [3/740] Generating lib/rte_telemetry_def with a custom command 00:00:39.694 [4/740] Generating lib/rte_kvargs_mingw with a custom command 00:00:39.694 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:39.694 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:39.694 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:39.694 [8/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:39.694 [9/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:39.694 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:39.694 [11/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:00:39.694 [12/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:39.694 [13/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:39.694 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:39.694 [15/740] Generating lib/rte_eal_mingw with a custom command 00:00:39.694 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:39.694 [17/740] Generating lib/rte_eal_def with a custom command 00:00:39.694 [18/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:39.694 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:39.694 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:39.694 [21/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:39.694 [22/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:39.694 [23/740] Linking static target lib/librte_kvargs.a 00:00:39.694 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:39.694 [25/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:39.694 [26/740] Generating lib/rte_ring_def with a custom command 00:00:39.694 [27/740] Generating lib/rte_ring_mingw with a custom command 00:00:39.694 [28/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:39.694 [29/740] Generating lib/rte_rcu_def with a custom command 00:00:39.694 [30/740] Generating lib/rte_rcu_mingw with a custom command 00:00:39.694 [31/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:39.694 [32/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:39.694 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:39.694 [34/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:39.694 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:39.694 [36/740] Generating lib/rte_mempool_def with a custom command 00:00:39.694 [37/740] Generating lib/rte_mempool_mingw with a custom command 00:00:39.694 [38/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:39.694 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:39.694 [40/740] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:39.694 [41/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:39.694 [42/740] Generating lib/rte_mbuf_def with a custom command 00:00:39.694 [43/740] Generating lib/rte_mbuf_mingw with a custom command 00:00:39.694 [44/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:39.694 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:40.025 [46/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:40.025 [47/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:40.025 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:40.026 [49/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:40.026 [50/740] Generating lib/rte_net_mingw with a custom command 00:00:40.026 [51/740] Generating lib/rte_net_def with a custom command 00:00:40.026 [52/740] Generating lib/rte_meter_def with a custom command 00:00:40.026 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:40.026 [54/740] Generating lib/rte_meter_mingw with a custom command 00:00:40.026 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:40.026 [56/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:40.026 [57/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:40.026 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:40.026 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:40.026 [60/740] Generating lib/rte_ethdev_mingw with a custom command 00:00:40.026 [61/740] Generating lib/rte_ethdev_def with a custom command 00:00:40.026 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:40.026 [63/740] Generating lib/rte_pci_def with a custom command 00:00:40.026 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:40.026 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:40.026 [66/740] Generating lib/rte_pci_mingw with a custom command 00:00:40.026 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:40.026 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:40.026 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:40.026 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:40.026 [71/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:40.026 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:40.026 [73/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:40.026 [74/740] Linking static target lib/librte_ring.a 00:00:40.026 [75/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:40.026 [76/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:00:40.026 [77/740] Generating lib/rte_cmdline_def with a custom command 00:00:40.026 [78/740] Generating lib/rte_cmdline_mingw with a custom command 00:00:40.026 [79/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:40.026 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:40.026 [81/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 
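Each [N/740] entry is ninja compiling one translation unit into the target's private object directory (lib/librte_eal.a.p/ and so on); once a target's objects are all built, a "Linking static target" step archives them. Roughly what one compile-plus-archive pair amounts to — an illustrative sketch only, with simplified include paths; the exact commands live in the generated build.ninja and use the c_args shown in the summary:

$ cc -fPIC -g -fcommon -Werror -Wno-stringop-overflow \
      -Ilib/kvargs -Ibuild-tmp -c lib/kvargs/rte_kvargs.c -o rte_kvargs.c.o
$ ar rcs librte_kvargs.a rte_kvargs.c.o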
00:00:40.026 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:40.026 [83/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:40.026 [84/740] Linking static target lib/librte_pci.a 00:00:40.026 [85/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:40.026 [86/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:40.026 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:40.026 [88/740] Generating lib/rte_metrics_def with a custom command 00:00:40.026 [89/740] Generating lib/rte_metrics_mingw with a custom command 00:00:40.026 [90/740] Linking static target lib/librte_meter.a 00:00:40.026 [91/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:40.026 [92/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:40.026 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:40.026 [94/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:40.026 [95/740] Generating lib/rte_hash_def with a custom command 00:00:40.026 [96/740] Generating lib/rte_hash_mingw with a custom command 00:00:40.026 [97/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:40.026 [98/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:40.026 [99/740] Generating lib/rte_timer_mingw with a custom command 00:00:40.026 [100/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:40.026 [101/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:40.026 [102/740] Generating lib/rte_timer_def with a custom command 00:00:40.026 [103/740] Generating lib/rte_acl_def with a custom command 00:00:40.026 [104/740] Generating lib/rte_acl_mingw with a custom command 00:00:40.026 [105/740] Generating lib/rte_bbdev_def with a custom command 00:00:40.026 [106/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:40.026 [107/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:40.026 [108/740] Generating lib/rte_bbdev_mingw with a custom command 00:00:40.288 [109/740] Generating lib/rte_bitratestats_def with a custom command 00:00:40.288 [110/740] Generating lib/rte_bitratestats_mingw with a custom command 00:00:40.288 [111/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:40.288 [112/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:40.288 [113/740] Linking target lib/librte_kvargs.so.23.0 00:00:40.288 [114/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:40.288 [115/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:40.288 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:40.288 [117/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:40.288 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:40.288 [119/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:40.288 [120/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:40.288 [121/740] Generating lib/rte_bpf_mingw with a custom command 00:00:40.288 [122/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:40.288 [123/740] Generating lib/rte_bpf_def with a custom command 00:00:40.288 [124/740] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:40.288 [125/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:40.288 [126/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:40.288 [127/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:40.289 [128/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:40.289 [129/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:00:40.289 [130/740] Generating lib/rte_cfgfile_def with a custom command 00:00:40.289 [131/740] Generating lib/rte_cfgfile_mingw with a custom command 00:00:40.289 [132/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:40.289 [133/740] Generating lib/rte_compressdev_def with a custom command 00:00:40.289 [134/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:40.289 [135/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:40.289 [136/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:40.550 [137/740] Generating lib/rte_compressdev_mingw with a custom command 00:00:40.550 [138/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:40.550 [139/740] Generating lib/rte_cryptodev_def with a custom command 00:00:40.550 [140/740] Generating lib/rte_cryptodev_mingw with a custom command 00:00:40.550 [141/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:40.550 [142/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:40.550 [143/740] Generating lib/rte_distributor_def with a custom command 00:00:40.550 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:40.550 [145/740] Generating lib/rte_distributor_mingw with a custom command 00:00:40.550 [146/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:40.550 [147/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:40.550 [148/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:40.550 [149/740] Generating lib/rte_efd_def with a custom command 00:00:40.550 [150/740] Generating lib/rte_efd_mingw with a custom command 00:00:40.550 [151/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:40.550 [152/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:00:40.550 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:40.550 [154/740] Linking static target lib/librte_telemetry.a 00:00:40.550 [155/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:40.550 [156/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:40.550 [157/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:40.550 [158/740] Generating lib/rte_eventdev_mingw with a custom command 00:00:40.550 [159/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:40.550 [160/740] Generating lib/rte_eventdev_def with a custom command 00:00:40.550 [161/740] Generating lib/rte_gpudev_def with a custom command 00:00:40.550 [162/740] Linking static target lib/librte_cmdline.a 00:00:40.550 [163/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:00:40.550 [164/740] Generating lib/rte_gpudev_mingw with a custom command 00:00:40.550 [165/740] Linking 
static target lib/net/libnet_crc_avx512_lib.a 00:00:40.550 [166/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:40.550 [167/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:40.550 [168/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:40.550 [169/740] Generating lib/rte_gro_def with a custom command 00:00:40.550 [170/740] Linking static target lib/librte_net.a 00:00:40.550 [171/740] Generating lib/rte_gro_mingw with a custom command 00:00:40.550 [172/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:40.551 [173/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:40.551 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:40.551 [175/740] Linking static target lib/librte_metrics.a 00:00:40.551 [176/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:40.551 [177/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:40.551 [178/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:40.811 [179/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:40.811 [180/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:40.811 [181/740] Generating lib/rte_gso_def with a custom command 00:00:40.811 [182/740] Linking static target lib/librte_timer.a 00:00:40.811 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:40.811 [184/740] Generating lib/rte_gso_mingw with a custom command 00:00:40.811 [185/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:40.811 [186/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:40.811 [187/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:40.811 [188/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:40.811 [189/740] Linking static target lib/librte_cfgfile.a 00:00:40.811 [190/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:40.811 [191/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:40.811 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:40.812 [193/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:40.812 [194/740] Generating lib/rte_ip_frag_def with a custom command 00:00:40.812 [195/740] Generating lib/rte_ip_frag_mingw with a custom command 00:00:40.812 [196/740] Linking static target lib/librte_bitratestats.a 00:00:40.812 [197/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:40.812 [198/740] Generating lib/rte_jobstats_def with a custom command 00:00:40.812 [199/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:40.812 [200/740] Generating lib/rte_jobstats_mingw with a custom command 00:00:40.812 [201/740] Generating lib/rte_latencystats_def with a custom command 00:00:40.812 [202/740] Generating lib/rte_latencystats_mingw with a custom command 00:00:40.812 [203/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:40.812 [204/740] Generating lib/rte_lpm_def with a custom command 00:00:40.812 [205/740] Generating lib/rte_lpm_mingw with a custom command 00:00:41.133 [206/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:41.133 [207/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.133 [208/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:41.133 [209/740] Compiling C object 
lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:41.133 [210/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:41.133 [211/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:41.133 [212/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:41.133 [213/740] Linking static target lib/librte_jobstats.a 00:00:41.133 [214/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.133 [215/740] Linking static target lib/librte_rcu.a 00:00:41.133 [216/740] Linking static target lib/librte_mempool.a 00:00:41.133 [217/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:41.133 [218/740] Generating lib/rte_member_mingw with a custom command 00:00:41.133 [219/740] Generating lib/rte_member_def with a custom command 00:00:41.133 [220/740] Generating lib/rte_pcapng_def with a custom command 00:00:41.133 [221/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:41.133 [222/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:41.133 [223/740] Linking target lib/librte_telemetry.so.23.0 00:00:41.133 [224/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:41.133 [225/740] Generating lib/rte_pcapng_mingw with a custom command 00:00:41.133 [226/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:00:41.133 [227/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.133 [228/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:41.133 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:41.133 [230/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:41.133 [231/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:41.133 [232/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:41.133 [233/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.133 [234/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.133 [235/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:41.133 [236/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:41.133 [237/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:41.133 [238/740] Linking static target lib/librte_bbdev.a 00:00:41.133 [239/740] Generating lib/rte_power_def with a custom command 00:00:41.440 [240/740] Generating lib/rte_power_mingw with a custom command 00:00:41.440 [241/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:41.440 [242/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:41.440 [243/740] Generating lib/rte_rawdev_def with a custom command 00:00:41.440 [244/740] Linking static target lib/librte_compressdev.a 00:00:41.440 [245/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:41.440 [246/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:41.440 [247/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:41.440 [248/740] Generating lib/rte_rawdev_mingw with a custom command 00:00:41.440 [249/740] Generating lib/rte_regexdev_def with a custom command 00:00:41.440 [250/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 
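The recurring "Generating lib/<name>.sym_chk with a custom command (wrapped by meson to capture output)" entries come from DPDK's symbol-check step, which roughly verifies each library's exported symbols against its version map. Once a shared object exists, the same information can be eyeballed by hand — a minimal sketch, assuming the build-tmp layout used in this run:

$ nm -D --defined-only build-tmp/lib/librte_telemetry.so.23.0 | head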
00:00:41.440 [251/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.440 [252/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:41.440 [253/740] Generating lib/rte_regexdev_mingw with a custom command 00:00:41.440 [254/740] Generating lib/rte_dmadev_def with a custom command 00:00:41.440 [255/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:41.440 [256/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:00:41.440 [257/740] Generating lib/rte_dmadev_mingw with a custom command 00:00:41.440 [258/740] Generating lib/rte_rib_mingw with a custom command 00:00:41.440 [259/740] Generating lib/rte_rib_def with a custom command 00:00:41.440 [260/740] Generating lib/rte_reorder_def with a custom command 00:00:41.440 [261/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:41.440 [262/740] Generating lib/rte_reorder_mingw with a custom command 00:00:41.440 [263/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:41.440 [264/740] Generating lib/rte_sched_def with a custom command 00:00:41.440 [265/740] Generating lib/rte_sched_mingw with a custom command 00:00:41.440 [266/740] Generating lib/rte_security_def with a custom command 00:00:41.440 [267/740] Generating lib/rte_security_mingw with a custom command 00:00:41.440 [268/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:41.440 [269/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:41.440 [270/740] Linking static target lib/librte_eal.a 00:00:41.440 [271/740] Generating lib/rte_stack_mingw with a custom command 00:00:41.440 [272/740] Generating lib/rte_stack_def with a custom command 00:00:41.440 [273/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:41.440 [274/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:41.440 [275/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.440 [276/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:00:41.440 [277/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:41.440 [278/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:00:41.440 [279/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:41.440 [280/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:41.440 [281/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:00:41.440 [282/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:41.440 [283/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.440 [284/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:00:41.440 [285/740] Generating lib/rte_vhost_def with a custom command 00:00:41.440 [286/740] Generating lib/rte_vhost_mingw with a custom command 00:00:41.440 [287/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:00:41.440 [288/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:41.440 [289/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:41.440 [290/740] Linking static target lib/librte_stack.a 00:00:41.440 [291/740] Generating lib/rte_ipsec_mingw with a custom command 00:00:41.440 [292/740] Generating lib/rte_ipsec_def with a custom command 00:00:41.440 [293/740] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:41.704 [294/740] Linking static target lib/librte_gro.a 00:00:41.704 [295/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:00:41.704 [296/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:41.704 [297/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:41.704 [298/740] Linking static target lib/librte_gpudev.a 00:00:41.704 [299/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:41.704 [300/740] Generating lib/rte_fib_def with a custom command 00:00:41.704 [301/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:41.704 [302/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:41.704 [303/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:41.704 [304/740] Linking static target lib/librte_latencystats.a 00:00:41.704 [305/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:41.704 [306/740] Linking static target lib/librte_mbuf.a 00:00:41.704 [307/740] Linking static target lib/librte_distributor.a 00:00:41.704 [308/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:41.704 [309/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:00:41.704 [310/740] Linking static target lib/librte_gso.a 00:00:41.704 [311/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:41.704 [312/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:00:41.704 [313/740] Generating lib/rte_fib_mingw with a custom command 00:00:41.704 [314/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:00:41.704 [315/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:00:41.704 [316/740] Linking static target lib/librte_rawdev.a 00:00:41.704 [317/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:41.704 [318/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:00:41.704 [319/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.704 [320/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:00:41.970 [321/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:41.970 [322/740] Linking static target lib/librte_ip_frag.a 00:00:41.970 [323/740] Linking static target lib/librte_dmadev.a 00:00:41.970 [324/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:00:41.970 [325/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.970 [326/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:41.970 [327/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:00:41.970 [328/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.970 [329/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:00:41.970 [330/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:00:41.970 [331/740] Linking static target lib/librte_bpf.a 00:00:41.970 [332/740] Generating lib/rte_port_def with a custom command 00:00:41.970 [333/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:00:41.970 [334/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:00:41.970 [335/740] Generating lib/latencystats.sym_chk with a 
custom command (wrapped by meson to capture output) 00:00:41.970 [336/740] Generating lib/rte_port_mingw with a custom command 00:00:41.970 [337/740] Generating lib/rte_pdump_mingw with a custom command 00:00:41.970 [338/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:00:42.233 [339/740] Generating lib/rte_pdump_def with a custom command 00:00:42.233 [340/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:00:42.233 [341/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:00:42.233 [342/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.233 [343/740] Linking static target lib/librte_regexdev.a 00:00:42.233 [344/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.233 [345/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:00:42.233 [346/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:00:42.233 [347/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:00:42.233 [348/740] Linking static target lib/librte_power.a 00:00:42.233 [349/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.233 [350/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:00:42.233 [351/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:00:42.233 [352/740] Linking static target lib/librte_pcapng.a 00:00:42.233 [353/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:00:42.233 [354/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:00:42.233 [355/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:00:42.233 [356/740] Linking static target lib/librte_reorder.a 00:00:42.233 [357/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:00:42.233 [358/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.233 [359/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:00:42.233 [360/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:00:42.233 [361/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:00:42.233 [362/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:00:42.496 [363/740] Linking static target lib/librte_security.a 00:00:42.496 [364/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:00:42.496 [365/740] Generating lib/rte_table_mingw with a custom command 00:00:42.496 [366/740] Generating lib/rte_table_def with a custom command 00:00:42.496 [367/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:00:42.496 [368/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:00:42.496 [369/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:00:42.496 [370/740] Linking static target lib/librte_efd.a 00:00:42.496 [371/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:00:42.496 [372/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.496 [373/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:00:42.496 [374/740] Linking static target lib/librte_lpm.a 00:00:42.496 [375/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.496 [376/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 
00:00:42.496 [377/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:00:42.496 [378/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:00:42.496 [379/740] Generating lib/rte_pipeline_def with a custom command 00:00:42.496 [380/740] Generating lib/rte_pipeline_mingw with a custom command 00:00:42.496 [381/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:00:42.496 [382/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:00:42.496 [383/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.763 [384/740] Generating lib/rte_graph_def with a custom command 00:00:42.763 [385/740] Generating lib/rte_graph_mingw with a custom command 00:00:42.763 [386/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.763 [387/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:00:42.763 [388/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.763 [389/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.763 [390/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.763 [391/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:00:42.763 [392/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:00:42.763 [393/740] Linking static target lib/librte_rib.a 00:00:42.763 [394/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:00:42.763 [395/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:00:42.763 [396/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:00:42.763 [397/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:00:42.763 [398/740] Generating lib/rte_node_def with a custom command 00:00:42.763 [399/740] Generating lib/rte_node_mingw with a custom command 00:00:42.763 [400/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:00:43.027 [401/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:00:43.027 [402/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:00:43.027 [403/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:00:43.027 [404/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:00:43.027 [405/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:00:43.027 [406/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:00:43.027 [407/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:00:43.027 [408/740] Generating drivers/rte_bus_pci_def with a custom command 00:00:43.027 [409/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:00:43.027 [410/740] Generating drivers/rte_bus_vdev_def with a custom command 00:00:43.027 [411/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:00:43.027 [412/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:43.027 [413/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:00:43.027 [414/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:00:43.027 [415/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:00:43.027 [416/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 
00:00:43.027 [417/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:00:43.027 [418/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:00:43.027 [419/740] Generating drivers/rte_mempool_ring_def with a custom command 00:00:43.027 [420/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:00:43.027 [421/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:00:43.027 [422/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:00:43.027 [423/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:00:43.027 [424/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:00:43.027 [425/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:00:43.027 [426/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:00:43.295 [427/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:00:43.295 [428/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:00:43.295 [429/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:00:43.295 [430/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:00:43.295 [431/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:00:43.295 [432/740] Linking static target lib/librte_fib.a 00:00:43.295 [433/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:00:43.295 [434/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:00:43.295 [435/740] Generating drivers/rte_net_i40e_def with a custom command 00:00:43.295 [436/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:00:43.295 [437/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:00:43.295 [438/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:00:43.295 [439/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:43.295 [440/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:00:43.295 [441/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:00:43.295 [442/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:00:43.295 [443/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:00:43.295 [444/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:00:43.295 [445/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:00:43.295 [446/740] Linking static target lib/librte_graph.a 00:00:43.295 [447/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:00:43.295 [448/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:00:43.295 [449/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:00:43.561 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:00:43.561 [451/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:00:43.561 [452/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:00:43.561 [453/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:00:43.561 [454/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:43.561 [455/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:00:43.561 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:00:43.561 [457/740] Compiling C 
object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:00:43.561 [458/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:00:43.561 [459/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:00:43.561 [460/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:00:43.561 [461/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:00:43.561 [462/740] Linking static target lib/librte_cryptodev.a 00:00:43.561 [463/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:00:43.561 [464/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:43.824 [465/740] Linking static target drivers/librte_bus_vdev.a 00:00:43.824 [466/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:43.824 [467/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:43.824 [468/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:00:43.824 [469/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:00:43.824 [470/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:00:43.824 [471/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:00:43.824 [472/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:00:43.824 [473/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:00:43.824 [474/740] Linking static target lib/librte_pdump.a 00:00:43.824 [475/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:00:43.824 [476/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:00:43.824 [477/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:00:43.824 [478/740] Linking static target lib/librte_ethdev.a 00:00:43.824 [479/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:00:43.824 [480/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:00:43.824 [481/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:00:44.087 [482/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:00:44.087 [483/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:00:44.087 [484/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:00:44.087 [485/740] Linking static target lib/librte_sched.a 00:00:44.087 [486/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:44.087 [487/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:44.087 [488/740] Linking static target drivers/librte_bus_pci.a 00:00:44.087 [489/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:00:44.087 [490/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:00:44.087 [491/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:00:44.087 [492/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:44.087 [493/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:00:44.087 [494/740] Linking static target lib/librte_table.a 00:00:44.087 [495/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:00:44.087 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 
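Driver targets follow a slightly different pattern from the libraries: their objects are first collected into a temporary archive (libtmp_rte_bus_vdev.a), a generated rte_bus_vdev.pmd.c embeds the driver-registration metadata, and the result is linked both as drivers/librte_bus_vdev.a and as drivers/librte_bus_vdev.so.23.0. That embedded metadata can be read back from the finished binary — a sketch assuming the usertools script shipped in this source tree:

$ python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_bus_vdev.so.23.0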
00:00:44.354 [497/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:00:44.354 [498/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:00:44.354 [499/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:00:44.354 [500/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:00:44.354 [501/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:00:44.354 [502/740] Linking static target lib/librte_member.a 00:00:44.354 [503/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:00:44.354 [504/740] Linking static target lib/librte_ipsec.a 00:00:44.354 [505/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:00:44.354 [506/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:00:44.354 [507/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:00:44.354 [508/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:00:44.354 [509/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:00:44.354 [510/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:00:44.354 [511/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:00:44.354 [512/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:00:44.617 [513/740] Linking static target lib/librte_hash.a 00:00:44.617 [514/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:00:44.617 [515/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:00:44.617 [516/740] Linking static target lib/librte_eventdev.a 00:00:44.617 [517/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:00:44.617 [518/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:00:44.617 [519/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:00:44.617 [520/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:00:44.617 [521/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:00:44.617 [522/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:00:44.617 [523/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:00:44.617 [524/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:00:44.617 [525/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:44.617 [526/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:00:44.617 [527/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:00:44.617 [528/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:00:44.617 [529/740] Linking static target lib/librte_node.a 00:00:44.617 [530/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:00:44.882 [531/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:00:44.882 [532/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:00:44.882 [533/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:00:44.882 [534/740] Generating lib/graph.sym_chk with a custom command (wrapped 
by meson to capture output) 00:00:44.882 [535/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:00:44.882 [536/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:00:44.882 [537/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:00:44.882 [538/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:00:44.882 [539/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:00:44.882 [540/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:00:44.882 [541/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:00:44.882 [542/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:00:44.882 [543/740] Linking static target drivers/librte_mempool_ring.a 00:00:44.882 [544/740] Linking static target lib/librte_port.a 00:00:44.882 [545/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:00:44.882 [546/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:00:45.141 [547/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:00:45.141 [548/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:00:45.141 [549/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:00:45.141 [550/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:00:45.141 [551/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:00:45.141 [552/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:00:45.142 [553/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:00:45.142 [554/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:00:45.142 [555/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:00:45.142 [556/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:00:45.142 [557/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:00:45.142 [558/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:00:45.142 [559/740] Linking static target lib/librte_acl.a 00:00:45.142 [560/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:00:45.142 [561/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:00:45.142 [562/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:00:45.142 [563/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:00:45.142 [564/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:00:45.402 [565/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:00:45.402 [566/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:00:45.402 [567/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:00:45.402 [568/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:00:45.402 [569/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:00:45.402 [570/740] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:00:45.402 [571/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:00:45.402 [572/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:00:45.402 [573/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:00:45.402 [574/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:00:45.402 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:00:45.661 [576/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:00:45.661 [577/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:00:45.661 [578/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:00:45.661 [579/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:00:45.661 [580/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:00:45.661 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:00:45.661 [582/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:00:45.661 [583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:00:45.661 [584/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:00:45.661 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:00:45.661 [586/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:00:45.920 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:00:45.920 [588/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:00:45.920 [589/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:00:45.920 [590/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:00:45.920 [591/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:00:45.920 [592/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:00:45.920 [593/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:00:45.920 [594/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:00:45.920 [595/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:00:45.920 [596/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:00:45.920 [597/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:00:45.920 [598/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:00:45.920 [599/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:00:45.920 [600/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:00:45.920 [601/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:00:45.920 [602/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:00:45.920 [603/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:00:45.920 [604/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:00:45.920 [605/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:00:45.920 [606/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:00:45.920 [607/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:00:46.179 [608/740] Compiling C object 
app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:00:46.179 [609/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:00:46.179 [610/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:00:46.179 [611/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:00:46.437 [612/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:00:46.696 [613/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:00:46.696 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:00:46.696 [615/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:00:46.954 [616/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:46.954 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:00:47.213 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:00:47.213 [619/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:47.213 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:00:47.471 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:00:47.471 [622/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:00:48.037 [623/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:00:48.037 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:00:48.037 [625/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:00:48.295 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:00:48.295 [627/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:48.295 [628/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:48.295 [629/740] Linking static target drivers/librte_net_i40e.a 00:00:48.861 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:00:48.861 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:00:49.119 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:00:49.376 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.905 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.436 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.436 [636/740] Linking target lib/librte_eal.so.23.0 00:00:54.436 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:00:54.436 [638/740] Linking target lib/librte_ring.so.23.0 00:00:54.436 [639/740] Linking target lib/librte_pci.so.23.0 00:00:54.436 [640/740] Linking target lib/librte_meter.so.23.0 00:00:54.436 [641/740] Linking target drivers/librte_bus_vdev.so.23.0 00:00:54.436 [642/740] Linking target lib/librte_stack.so.23.0 00:00:54.436 [643/740] Linking target lib/librte_jobstats.so.23.0 00:00:54.436 [644/740] Linking target lib/librte_timer.so.23.0 00:00:54.436 [645/740] Linking target lib/librte_cfgfile.so.23.0 00:00:54.436 [646/740] Linking target lib/librte_rawdev.so.23.0 00:00:54.436 [647/740] Linking target lib/librte_dmadev.so.23.0 00:00:54.436 [648/740] Linking target lib/librte_graph.so.23.0 00:00:54.436 [649/740] Linking target lib/librte_acl.so.23.0 
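From roughly [634/740] onward the compile steps give way to final links: each librte_*.so.23.0 is produced, its .symbols file is generated, and downstream targets are linked against it in dependency order. To spot-check that a freshly linked library resolves its intra-DPDK dependencies from the build tree — a minimal sketch with an assumed library name:

$ ldd build-tmp/lib/librte_ethdev.so.23.0 | grep librte_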
00:00:54.436 [650/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:00:54.436 [651/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:00:54.436 [652/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:00:54.436 [653/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:00:54.436 [654/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:00:54.436 [655/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:00:54.436 [656/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:00:54.436 [657/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:00:54.436 [658/740] Linking target drivers/librte_bus_pci.so.23.0 00:00:54.436 [659/740] Linking target lib/librte_rcu.so.23.0 00:00:54.436 [660/740] Linking target lib/librte_mempool.so.23.0 00:00:54.694 [661/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:00:54.694 [662/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:00:54.694 [663/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:00:54.694 [664/740] Linking target drivers/librte_mempool_ring.so.23.0 00:00:54.694 [665/740] Linking target lib/librte_rib.so.23.0 00:00:54.694 [666/740] Linking target lib/librte_mbuf.so.23.0 00:00:54.694 [667/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:00:54.694 [668/740] Linking static target lib/librte_vhost.a 00:00:54.959 [669/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:00:54.959 [670/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:00:54.959 [671/740] Linking target lib/librte_reorder.so.23.0 00:00:54.959 [672/740] Linking target lib/librte_gpudev.so.23.0 00:00:54.959 [673/740] Linking target lib/librte_distributor.so.23.0 00:00:54.959 [674/740] Linking target lib/librte_cryptodev.so.23.0 00:00:54.960 [675/740] Linking target lib/librte_compressdev.so.23.0 00:00:54.960 [676/740] Linking target lib/librte_net.so.23.0 00:00:54.960 [677/740] Linking target lib/librte_bbdev.so.23.0 00:00:54.960 [678/740] Linking target lib/librte_regexdev.so.23.0 00:00:54.960 [679/740] Linking target lib/librte_sched.so.23.0 00:00:54.960 [680/740] Linking target lib/librte_fib.so.23.0 00:00:54.960 [681/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:00:54.960 [682/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:00:54.960 [683/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:00:55.219 [684/740] Linking target lib/librte_ethdev.so.23.0 00:00:55.219 [685/740] Linking target lib/librte_security.so.23.0 00:00:55.219 [686/740] Linking target lib/librte_hash.so.23.0 00:00:55.219 [687/740] Linking target lib/librte_cmdline.so.23.0 00:00:55.219 [688/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:00:55.219 [689/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:00:55.219 [690/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:00:55.219 [691/740] Linking target lib/librte_metrics.so.23.0 00:00:55.219 [692/740] Linking target lib/librte_gso.so.23.0 00:00:55.219 [693/740] Linking 
target lib/librte_ip_frag.so.23.0 00:00:55.219 [694/740] Linking target lib/librte_gro.so.23.0 00:00:55.219 [695/740] Linking target lib/librte_efd.so.23.0 00:00:55.219 [696/740] Linking target lib/librte_lpm.so.23.0 00:00:55.219 [697/740] Linking target lib/librte_pcapng.so.23.0 00:00:55.219 [698/740] Linking target lib/librte_eventdev.so.23.0 00:00:55.219 [699/740] Linking target lib/librte_bpf.so.23.0 00:00:55.219 [700/740] Linking target lib/librte_power.so.23.0 00:00:55.219 [701/740] Linking target lib/librte_member.so.23.0 00:00:55.219 [702/740] Linking target lib/librte_ipsec.so.23.0 00:00:55.477 [703/740] Linking target drivers/librte_net_i40e.so.23.0 00:00:55.477 [704/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:00:55.477 [705/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:00:55.477 [706/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:00:55.477 [707/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:00:55.477 [708/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:00:55.477 [709/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:00:55.477 [710/740] Linking target lib/librte_latencystats.so.23.0 00:00:55.477 [711/740] Linking target lib/librte_bitratestats.so.23.0 00:00:55.477 [712/740] Linking target lib/librte_node.so.23.0 00:00:55.477 [713/740] Linking target lib/librte_pdump.so.23.0 00:00:55.477 [714/740] Linking target lib/librte_port.so.23.0 00:00:55.735 [715/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:00:55.735 [716/740] Linking target lib/librte_table.so.23.0 00:00:55.993 [717/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:00:56.251 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:00:56.251 [719/740] Linking static target lib/librte_pipeline.a 00:00:56.509 [720/740] Linking target app/dpdk-pdump 00:00:56.509 [721/740] Linking target app/dpdk-test-acl 00:00:56.509 [722/740] Linking target app/dpdk-proc-info 00:00:56.509 [723/740] Linking target app/dpdk-dumpcap 00:00:56.509 [724/740] Linking target app/dpdk-test-crypto-perf 00:00:56.509 [725/740] Linking target app/dpdk-test-cmdline 00:00:56.509 [726/740] Linking target app/dpdk-test-fib 00:00:56.509 [727/740] Linking target app/dpdk-test-gpudev 00:00:56.509 [728/740] Linking target app/dpdk-test-pipeline 00:00:56.509 [729/740] Linking target app/dpdk-test-regex 00:00:56.509 [730/740] Linking target app/dpdk-test-compress-perf 00:00:56.509 [731/740] Linking target app/dpdk-test-flow-perf 00:00:56.509 [732/740] Linking target app/dpdk-test-security-perf 00:00:56.509 [733/740] Linking target app/dpdk-test-sad 00:00:56.509 [734/740] Linking target app/dpdk-test-eventdev 00:00:56.509 [735/740] Linking target app/dpdk-test-bbdev 00:00:56.768 [736/740] Linking target app/dpdk-testpmd 00:00:56.768 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.026 [738/740] Linking target lib/librte_vhost.so.23.0 00:01:01.245 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.504 [740/740] Linking target lib/librte_pipeline.so.23.0 00:01:01.504 10:21:49 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j72 
00:01:01.504 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp'
00:01:01.504 [0/1] Installing files.
00:01:01.767 Installing subdir /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:01.767 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:01:01.768 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:01:01.769 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.770 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.771 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:01.772 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:01.772 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:01.772 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:01.773 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:01.773 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:01.773 Installing lib/librte_kvargs.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_telemetry.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_eal.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_rcu.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_mempool.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_mbuf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_net.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_meter.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_ethdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_cmdline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_cmdline.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_metrics.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_hash.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_timer.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_acl.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_bbdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_bpf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_compressdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_distributor.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_efd.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_eventdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_gpudev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_gro.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_gso.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 
Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_jobstats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_latencystats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_lpm.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_member.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_pcapng.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_power.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_rawdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_regexdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_dmadev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:01.773 Installing lib/librte_rib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_reorder.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_sched.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_security.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_stack.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_vhost.a to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_ipsec.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_fib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_port.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_pdump.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.036 Installing lib/librte_table.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing lib/librte_pipeline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing lib/librte_graph.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing lib/librte_node.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:02.037 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:02.037 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:02.037 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.037 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:02.037 Installing app/dpdk-dumpcap to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-pdump to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-proc-info to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-acl to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-cmdline to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-fib to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-testpmd to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-regex to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-sad to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.037 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 
00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.038 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_comp.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_stub.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:01:02.041 
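The usertools scripts staged into build/bin just above are standalone host-side helpers, not build inputs. A minimal sanity check of the staged copies might look like the following sketch (paths taken from the install lines above; --status and --show are the scripts' documented flags):

    $ /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin/dpdk-devbind.py --status    # list network devices and their bound drivers
    $ /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin/dpdk-hugepages.py --show    # report the host's current hugepage configuration
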
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:01:02.041 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:01:02.041 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:01:02.041 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:02.041 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:01:02.041 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:02.041 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:01:02.041 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:02.041 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:01:02.041 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:02.041 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:01:02.041 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:02.041 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:01:02.041 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:02.041 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:01:02.041 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:02.041 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so.23 00:01:02.041 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so 00:01:02.041 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:01:02.041 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:02.041 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:01:02.041 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:02.041 Installing symlink pointing to librte_pci.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:01:02.041 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:02.041 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:01:02.041 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:02.041 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:01:02.041 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:02.041 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:01:02.041 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:02.041 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:01:02.041 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:02.041 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:01:02.041 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:02.041 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:01:02.041 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:02.041 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:01:02.041 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:02.041 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:01:02.041 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:02.041 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:01:02.041 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:02.041 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:01:02.041 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:02.041 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:01:02.041 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:02.041 Installing symlink pointing 
to librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:01:02.041 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:02.041 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:01:02.041 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:02.041 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:01:02.041 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:02.041 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:01:02.041 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:02.041 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:01:02.041 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:02.041 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:01:02.041 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:02.041 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:01:02.041 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:02.041 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:01:02.041 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:02.041 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:01:02.042 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:02.042 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:01:02.042 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:02.042 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so.23 00:01:02.042 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so 00:01:02.042 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:01:02.042 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:02.042 
Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so.23 00:01:02.042 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so 00:01:02.042 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:01:02.042 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:02.042 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:01:02.042 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:02.042 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:01:02.042 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:02.042 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:01:02.042 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:02.042 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:01:02.042 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:02.042 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:01:02.042 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:02.042 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so.23 00:01:02.042 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so 00:01:02.042 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:01:02.042 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:01:02.042 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:01:02.042 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:01:02.042 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:01:02.042 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:01:02.042 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:01:02.042 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:01:02.042 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:01:02.042 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:01:02.042 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:01:02.042 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:01:02.042 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:01:02.042 Installing symlink pointing to 
librte_stack.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:02.042 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:01:02.042 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:02.042 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:01:02.042 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:02.042 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:01:02.042 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:02.042 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so.23 00:01:02.042 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so 00:01:02.042 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:01:02.042 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:02.042 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so.23 00:01:02.042 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so 00:01:02.042 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:01:02.042 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:02.042 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:01:02.042 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:02.042 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so.23 00:01:02.042 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so 00:01:02.042 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:01:02.042 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:01:02.042 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:01:02.042 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:01:02.042 Installing symlink pointing to librte_mempool_ring.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:01:02.042 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:01:02.042 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:01:02.042 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:01:02.042 Running custom install script '/bin/sh /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:01:02.042 10:21:50 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:01:02.042 10:21:50 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:02.042 10:21:50 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:01:02.042 10:21:50 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:02.042 00:01:02.042 real 0m28.326s 00:01:02.042 user 7m0.796s 00:01:02.042 sys 1m45.408s 00:01:02.042 10:21:50 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:02.042 10:21:50 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:02.042 ************************************ 00:01:02.042 END TEST build_native_dpdk 00:01:02.042 ************************************ 00:01:02.301 10:21:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:02.301 10:21:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:02.301 10:21:50 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:02.301 10:21:50 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:02.301 10:21:50 -- common/autobuild_common.sh@428 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:02.301 10:21:50 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:02.301 10:21:50 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:02.301 10:21:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.301 ************************************ 00:01:02.301 START TEST autobuild_llvm_precompile 00:01:02.301 ************************************ 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autotest_common.sh@1121 -- $ _llvm_precompile 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:01:02.301 Target: x86_64-redhat-linux-gnu 00:01:02.301 Thread model: posix 00:01:02.301 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:01:02.301 10:21:50 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:02.582 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:02.582 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:02.582 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:02.840 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:03.098 Using 'verbs' RDMA provider 00:01:19.354 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:31.602 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:31.602 Creating mk/config.mk...done. 00:01:31.602 Creating mk/cc.flags.mk...done. 00:01:31.602 Type 'make' to build. 00:01:31.602 00:01:31.602 real 0m29.366s 00:01:31.602 user 0m12.674s 00:01:31.602 sys 0m16.025s 00:01:31.602 10:22:19 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:31.602 10:22:19 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:31.602 ************************************ 00:01:31.602 END TEST autobuild_llvm_precompile 00:01:31.602 ************************************ 00:01:31.602 10:22:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:31.602 10:22:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:31.602 10:22:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:31.602 10:22:19 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:31.602 10:22:19 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:31.884 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:01:31.884 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:01:31.884 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:01:32.175 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:32.442 Using 'verbs' RDMA provider 00:01:45.611 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:57.826 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:57.826 Creating mk/config.mk...done. 00:01:57.826 Creating mk/cc.flags.mk...done. 00:01:57.826 Type 'make' to build. 00:01:57.826 10:22:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:01:57.826 10:22:45 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:57.826 10:22:45 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:57.826 10:22:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.826 ************************************ 00:01:57.826 START TEST make 00:01:57.826 ************************************ 00:01:57.826 10:22:45 make -- common/autotest_common.sh@1121 -- $ make -j72 00:01:57.826 make[1]: Nothing to be done for 'all'. 00:01:59.210 The Meson build system 00:01:59.210 Version: 1.3.1 00:01:59.210 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:59.210 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:59.210 Build type: native build 00:01:59.210 Project name: libvfio-user 00:01:59.210 Project version: 0.0.1 00:01:59.210 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:59.210 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:59.210 Host machine cpu family: x86_64 00:01:59.210 Host machine cpu: x86_64 00:01:59.210 Run-time dependency threads found: YES 00:01:59.210 Library dl found: YES 00:01:59.210 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.210 Run-time dependency json-c found: YES 0.17 00:01:59.210 Run-time dependency cmocka found: YES 1.1.7 00:01:59.210 Program pytest-3 found: NO 00:01:59.210 Program flake8 found: NO 00:01:59.210 Program misspell-fixer found: NO 00:01:59.210 Program restructuredtext-lint found: NO 00:01:59.210 Program valgrind found: YES (/usr/bin/valgrind) 00:01:59.210 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.210 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.210 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.210 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:59.210 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:59.210 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:59.210 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
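The Meson detection output above (clang-16 host compiler, json-c and cmocka found, two meson_version warnings) and the target summary that follows come from SPDK's build scripts configuring the bundled libvfio-user. A hand-run equivalent, assuming the same source and build directories that appear in this log, would be roughly:

    $ meson setup /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug \
          /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user \
          --buildtype=debug --default-library=static --libdir=/usr/local/lib
    $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
    $ DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user \
          meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug

The ninja and DESTDIR-prefixed meson install steps mirror the invocations recorded verbatim a few entries below.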
00:01:59.210 Build targets in project: 8 00:01:59.210 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:59.210 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:59.210 00:01:59.210 libvfio-user 0.0.1 00:01:59.210 00:01:59.210 User defined options 00:01:59.210 buildtype : debug 00:01:59.210 default_library: static 00:01:59.210 libdir : /usr/local/lib 00:01:59.210 00:01:59.210 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.469 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:59.728 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:59.728 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:59.728 [3/36] Compiling C object samples/null.p/null.c.o 00:01:59.728 [4/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:59.728 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:59.728 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:59.728 [7/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:59.728 [8/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:59.728 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:59.728 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:59.728 [11/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:59.728 [12/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:59.728 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:59.728 [14/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:59.728 [15/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:59.728 [16/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:59.728 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:59.728 [18/36] Compiling C object samples/server.p/server.c.o 00:01:59.728 [19/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:59.728 [20/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:59.728 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:59.728 [22/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:59.728 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:59.728 [24/36] Compiling C object samples/client.p/client.c.o 00:01:59.728 [25/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:59.728 [26/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:59.728 [27/36] Linking target samples/client 00:01:59.728 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:59.728 [29/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:59.728 [30/36] Linking static target lib/libvfio-user.a 00:01:59.728 [31/36] Linking target test/unit_tests 00:01:59.728 [32/36] Linking target samples/gpio-pci-idio-16 00:01:59.728 [33/36] Linking target samples/null 00:01:59.728 [34/36] Linking target samples/server 00:01:59.728 [35/36] Linking target samples/shadow_ioeventfd_server 00:01:59.728 [36/36] Linking target samples/lspci 00:01:59.728 INFO: autodetecting backend as ninja 00:01:59.728 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:59.986 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:00.245 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:00.245 ninja: no work to do. 00:02:04.434 CC lib/log/log.o 00:02:04.434 CC lib/log/log_flags.o 00:02:04.434 CC lib/log/log_deprecated.o 00:02:04.434 CC lib/ut_mock/mock.o 00:02:04.434 CC lib/ut/ut.o 00:02:04.434 LIB libspdk_log.a 00:02:04.434 LIB libspdk_ut_mock.a 00:02:04.434 LIB libspdk_ut.a 00:02:04.434 CC lib/ioat/ioat.o 00:02:04.434 CC lib/util/cpuset.o 00:02:04.434 CC lib/util/base64.o 00:02:04.434 CC lib/util/bit_array.o 00:02:04.434 CXX lib/trace_parser/trace.o 00:02:04.434 CC lib/util/crc16.o 00:02:04.434 CC lib/util/crc32c.o 00:02:04.434 CC lib/util/crc64.o 00:02:04.434 CC lib/util/crc32.o 00:02:04.434 CC lib/util/crc32_ieee.o 00:02:04.434 CC lib/util/dif.o 00:02:04.434 CC lib/util/fd.o 00:02:04.434 CC lib/util/file.o 00:02:04.434 CC lib/util/hexlify.o 00:02:04.434 CC lib/util/iov.o 00:02:04.434 CC lib/util/math.o 00:02:04.434 CC lib/util/pipe.o 00:02:04.434 CC lib/util/string.o 00:02:04.434 CC lib/util/uuid.o 00:02:04.434 CC lib/util/strerror_tls.o 00:02:04.434 CC lib/dma/dma.o 00:02:04.434 CC lib/util/fd_group.o 00:02:04.434 CC lib/util/xor.o 00:02:04.434 CC lib/util/zipf.o 00:02:04.693 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.693 CC lib/vfio_user/host/vfio_user.o 00:02:04.693 LIB libspdk_dma.a 00:02:04.693 LIB libspdk_ioat.a 00:02:04.951 LIB libspdk_vfio_user.a 00:02:04.951 LIB libspdk_util.a 00:02:04.951 LIB libspdk_trace_parser.a 00:02:05.209 CC lib/idxd/idxd.o 00:02:05.209 CC lib/idxd/idxd_user.o 00:02:05.209 CC lib/json/json_util.o 00:02:05.209 CC lib/env_dpdk/env.o 00:02:05.210 CC lib/json/json_parse.o 00:02:05.210 CC lib/idxd/idxd_kernel.o 00:02:05.210 CC lib/env_dpdk/memory.o 00:02:05.210 CC lib/env_dpdk/pci.o 00:02:05.210 CC lib/env_dpdk/init.o 00:02:05.210 CC lib/json/json_write.o 00:02:05.210 CC lib/env_dpdk/pci_ioat.o 00:02:05.210 CC lib/vmd/vmd.o 00:02:05.210 CC lib/env_dpdk/threads.o 00:02:05.210 CC lib/vmd/led.o 00:02:05.210 CC lib/env_dpdk/pci_virtio.o 00:02:05.210 CC lib/env_dpdk/pci_vmd.o 00:02:05.210 CC lib/env_dpdk/pci_idxd.o 00:02:05.210 CC lib/rdma/common.o 00:02:05.210 CC lib/rdma/rdma_verbs.o 00:02:05.210 CC lib/env_dpdk/pci_event.o 00:02:05.210 CC lib/env_dpdk/sigbus_handler.o 00:02:05.210 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.210 CC lib/env_dpdk/pci_dpdk.o 00:02:05.210 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.210 CC lib/conf/conf.o 00:02:05.468 LIB libspdk_conf.a 00:02:05.468 LIB libspdk_rdma.a 00:02:05.468 LIB libspdk_json.a 00:02:05.468 LIB libspdk_idxd.a 00:02:05.468 LIB libspdk_vmd.a 00:02:05.725 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.725 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.725 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.725 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.983 LIB libspdk_jsonrpc.a 00:02:06.242 LIB libspdk_env_dpdk.a 00:02:06.242 CC lib/rpc/rpc.o 00:02:06.500 LIB libspdk_rpc.a 00:02:06.758 CC lib/notify/notify_rpc.o 00:02:06.758 CC lib/notify/notify.o 00:02:06.758 CC lib/keyring/keyring_rpc.o 00:02:06.758 CC lib/keyring/keyring.o 00:02:06.758 CC lib/trace/trace.o 00:02:06.758 CC lib/trace/trace_flags.o 00:02:06.758 CC lib/trace/trace_rpc.o 00:02:06.758 LIB libspdk_notify.a 00:02:07.016 LIB libspdk_keyring.a 00:02:07.016 LIB libspdk_trace.a 00:02:07.275 CC lib/thread/thread.o 00:02:07.275 CC lib/thread/iobuf.o 00:02:07.275 CC lib/sock/sock.o 00:02:07.275 CC lib/sock/sock_rpc.o 00:02:07.534 LIB libspdk_sock.a 
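The interleaved CC/LIB pairs above are SPDK's quiet make output: each "LIB libspdk_*.a" line archives the object files from the preceding CC lines into a static library. A quick spot-check that an archive's members match its CC lines (the build/lib output path is assumed from SPDK's default in-tree layout):

    $ ar t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib/libspdk_log.a    # should list log.o, log_flags.o, log_deprecated.o
    $ nm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib/libspdk_log.a | grep ' T '    # exported (text) symbols
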
00:02:07.792 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:07.792 CC lib/nvme/nvme_ns_cmd.o
00:02:07.792 CC lib/nvme/nvme_ctrlr.o
00:02:07.792 CC lib/nvme/nvme_ns.o
00:02:07.792 CC lib/nvme/nvme_fabric.o
00:02:07.792 CC lib/nvme/nvme_pcie_common.o
00:02:07.792 CC lib/nvme/nvme_pcie.o
00:02:07.792 CC lib/nvme/nvme_qpair.o
00:02:07.792 CC lib/nvme/nvme.o
00:02:07.792 CC lib/nvme/nvme_quirks.o
00:02:07.792 CC lib/nvme/nvme_transport.o
00:02:07.792 CC lib/nvme/nvme_discovery.o
00:02:07.792 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:07.792 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:07.792 CC lib/nvme/nvme_tcp.o
00:02:07.792 CC lib/nvme/nvme_opal.o
00:02:07.792 CC lib/nvme/nvme_io_msg.o
00:02:07.792 CC lib/nvme/nvme_poll_group.o
00:02:07.792 CC lib/nvme/nvme_zns.o
00:02:07.792 CC lib/nvme/nvme_stubs.o
00:02:07.792 CC lib/nvme/nvme_auth.o
00:02:07.792 CC lib/nvme/nvme_cuse.o
00:02:07.792 CC lib/nvme/nvme_vfio_user.o
00:02:07.792 CC lib/nvme/nvme_rdma.o
00:02:08.051 LIB libspdk_thread.a
00:02:08.310 CC lib/accel/accel.o
00:02:08.310 CC lib/accel/accel_rpc.o
00:02:08.310 CC lib/accel/accel_sw.o
00:02:08.310 CC lib/vfu_tgt/tgt_rpc.o
00:02:08.310 CC lib/vfu_tgt/tgt_endpoint.o
00:02:08.310 CC lib/blob/blobstore.o
00:02:08.310 CC lib/blob/zeroes.o
00:02:08.310 CC lib/blob/request.o
00:02:08.310 CC lib/blob/blob_bs_dev.o
00:02:08.310 CC lib/virtio/virtio.o
00:02:08.310 CC lib/virtio/virtio_vfio_user.o
00:02:08.310 CC lib/virtio/virtio_vhost_user.o
00:02:08.310 CC lib/virtio/virtio_pci.o
00:02:08.310 CC lib/init/subsystem.o
00:02:08.310 CC lib/init/json_config.o
00:02:08.310 CC lib/init/subsystem_rpc.o
00:02:08.310 CC lib/init/rpc.o
00:02:08.569 LIB libspdk_init.a
00:02:08.569 LIB libspdk_vfu_tgt.a
00:02:08.569 LIB libspdk_virtio.a
00:02:08.828 CC lib/event/app.o
00:02:08.828 CC lib/event/reactor.o
00:02:08.828 CC lib/event/log_rpc.o
00:02:08.828 CC lib/event/app_rpc.o
00:02:08.828 CC lib/event/scheduler_static.o
00:02:09.087 LIB libspdk_accel.a
00:02:09.087 LIB libspdk_event.a
00:02:09.087 LIB libspdk_nvme.a
00:02:09.346 CC lib/bdev/bdev.o
00:02:09.346 CC lib/bdev/bdev_rpc.o
00:02:09.346 CC lib/bdev/bdev_zone.o
00:02:09.346 CC lib/bdev/part.o
00:02:09.346 CC lib/bdev/scsi_nvme.o
00:02:10.282 LIB libspdk_blob.a
00:02:10.542 CC lib/lvol/lvol.o
00:02:10.542 CC lib/blobfs/blobfs.o
00:02:10.542 CC lib/blobfs/tree.o
00:02:11.111 LIB libspdk_lvol.a
00:02:11.111 LIB libspdk_blobfs.a
00:02:11.111 LIB libspdk_bdev.a
00:02:11.374 CC lib/scsi/dev.o
00:02:11.374 CC lib/scsi/port.o
00:02:11.374 CC lib/scsi/lun.o
00:02:11.374 CC lib/scsi/scsi_bdev.o
00:02:11.374 CC lib/scsi/scsi.o
00:02:11.374 CC lib/nbd/nbd_rpc.o
00:02:11.374 CC lib/scsi/scsi_pr.o
00:02:11.374 CC lib/nbd/nbd.o
00:02:11.374 CC lib/ublk/ublk_rpc.o
00:02:11.374 CC lib/scsi/scsi_rpc.o
00:02:11.374 CC lib/scsi/task.o
00:02:11.374 CC lib/ublk/ublk.o
00:02:11.374 CC lib/ftl/ftl_core.o
00:02:11.374 CC lib/ftl/ftl_init.o
00:02:11.374 CC lib/ftl/ftl_layout.o
00:02:11.374 CC lib/ftl/ftl_debug.o
00:02:11.374 CC lib/ftl/ftl_io.o
00:02:11.374 CC lib/ftl/ftl_sb.o
00:02:11.374 CC lib/ftl/ftl_l2p_flat.o
00:02:11.374 CC lib/ftl/ftl_l2p.o
00:02:11.374 CC lib/ftl/ftl_nv_cache.o
00:02:11.374 CC lib/nvmf/ctrlr.o
00:02:11.374 CC lib/nvmf/ctrlr_discovery.o
00:02:11.374 CC lib/ftl/ftl_band_ops.o
00:02:11.374 CC lib/nvmf/ctrlr_bdev.o
00:02:11.374 CC lib/ftl/ftl_band.o
00:02:11.374 CC lib/nvmf/subsystem.o
00:02:11.375 CC lib/nvmf/nvmf.o
00:02:11.375 CC lib/nvmf/transport.o
00:02:11.375 CC lib/ftl/ftl_writer.o
00:02:11.375 CC lib/nvmf/nvmf_rpc.o
00:02:11.375 CC lib/ftl/ftl_rq.o
00:02:11.375 CC lib/ftl/ftl_reloc.o
00:02:11.375 CC lib/ftl/ftl_l2p_cache.o
00:02:11.375 CC lib/nvmf/tcp.o
00:02:11.375 CC lib/nvmf/mdns_server.o
00:02:11.375 CC lib/ftl/ftl_p2l.o
00:02:11.375 CC lib/nvmf/stubs.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:11.375 CC lib/nvmf/rdma.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:11.375 CC lib/nvmf/vfio_user.o
00:02:11.375 CC lib/nvmf/auth.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:11.375 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:11.375 CC lib/ftl/utils/ftl_md.o
00:02:11.375 CC lib/ftl/utils/ftl_conf.o
00:02:11.375 CC lib/ftl/utils/ftl_mempool.o
00:02:11.375 CC lib/ftl/utils/ftl_bitmap.o
00:02:11.375 CC lib/ftl/utils/ftl_property.o
00:02:11.375 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:11.375 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:11.375 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:11.375 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:11.375 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:11.375 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:11.375 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:11.375 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:11.375 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:11.375 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:11.375 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:11.375 CC lib/ftl/base/ftl_base_dev.o
00:02:11.375 CC lib/ftl/base/ftl_base_bdev.o
00:02:11.634 CC lib/ftl/ftl_trace.o
00:02:11.892 LIB libspdk_nbd.a
00:02:11.892 LIB libspdk_scsi.a
00:02:11.892 LIB libspdk_ublk.a
00:02:12.150 LIB libspdk_ftl.a
00:02:12.408 CC lib/iscsi/conn.o
00:02:12.408 CC lib/iscsi/init_grp.o
00:02:12.408 CC lib/iscsi/iscsi.o
00:02:12.408 CC lib/iscsi/md5.o
00:02:12.408 CC lib/iscsi/param.o
00:02:12.408 CC lib/iscsi/portal_grp.o
00:02:12.408 CC lib/iscsi/iscsi_rpc.o
00:02:12.408 CC lib/vhost/vhost.o
00:02:12.408 CC lib/iscsi/tgt_node.o
00:02:12.408 CC lib/iscsi/task.o
00:02:12.408 CC lib/vhost/vhost_rpc.o
00:02:12.408 CC lib/iscsi/iscsi_subsystem.o
00:02:12.408 CC lib/vhost/vhost_scsi.o
00:02:12.408 CC lib/vhost/vhost_blk.o
00:02:12.408 CC lib/vhost/rte_vhost_user.o
00:02:12.667 LIB libspdk_nvmf.a
00:02:12.925 LIB libspdk_vhost.a
00:02:13.183 LIB libspdk_iscsi.a
00:02:13.442 CC module/env_dpdk/env_dpdk_rpc.o
00:02:13.442 CC module/vfu_device/vfu_virtio.o
00:02:13.442 CC module/vfu_device/vfu_virtio_blk.o
00:02:13.442 CC module/vfu_device/vfu_virtio_scsi.o
00:02:13.442 CC module/vfu_device/vfu_virtio_rpc.o
00:02:13.700 CC module/accel/dsa/accel_dsa.o
00:02:13.700 CC module/accel/dsa/accel_dsa_rpc.o
00:02:13.700 CC module/sock/posix/posix.o
00:02:13.700 CC module/accel/error/accel_error_rpc.o
00:02:13.700 CC module/accel/ioat/accel_ioat.o
00:02:13.700 CC module/accel/error/accel_error.o
00:02:13.700 CC module/accel/ioat/accel_ioat_rpc.o
00:02:13.700 CC module/scheduler/gscheduler/gscheduler.o
00:02:13.700 LIB libspdk_env_dpdk_rpc.a
00:02:13.700 CC module/keyring/linux/keyring.o
00:02:13.700 CC module/keyring/linux/keyring_rpc.o
00:02:13.700 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:13.700 CC module/accel/iaa/accel_iaa.o
00:02:13.700 CC module/accel/iaa/accel_iaa_rpc.o
00:02:13.700 CC module/keyring/file/keyring.o
00:02:13.701 CC module/keyring/file/keyring_rpc.o
00:02:13.701 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:13.701 CC module/blob/bdev/blob_bdev.o
00:02:13.701 LIB libspdk_scheduler_gscheduler.a
00:02:13.701 LIB libspdk_keyring_linux.a
00:02:13.701 LIB libspdk_scheduler_dpdk_governor.a
00:02:13.701 LIB libspdk_accel_error.a
00:02:13.701 LIB libspdk_keyring_file.a
00:02:13.701 LIB libspdk_accel_ioat.a
00:02:13.701 LIB libspdk_accel_iaa.a
00:02:13.961 LIB libspdk_scheduler_dynamic.a
00:02:13.961 LIB libspdk_accel_dsa.a
00:02:13.961 LIB libspdk_blob_bdev.a
00:02:13.961 LIB libspdk_vfu_device.a
00:02:14.220 LIB libspdk_sock_posix.a
00:02:14.220 CC module/bdev/lvol/vbdev_lvol.o
00:02:14.220 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:14.220 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:14.220 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:14.220 CC module/blobfs/bdev/blobfs_bdev.o
00:02:14.220 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:14.220 CC module/bdev/passthru/vbdev_passthru.o
00:02:14.220 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:14.220 CC module/bdev/split/vbdev_split.o
00:02:14.220 CC module/bdev/raid/bdev_raid.o
00:02:14.220 CC module/bdev/split/vbdev_split_rpc.o
00:02:14.220 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:14.220 CC module/bdev/raid/bdev_raid_sb.o
00:02:14.220 CC module/bdev/ftl/bdev_ftl.o
00:02:14.220 CC module/bdev/raid/bdev_raid_rpc.o
00:02:14.220 CC module/bdev/malloc/bdev_malloc.o
00:02:14.220 CC module/bdev/raid/raid0.o
00:02:14.220 CC module/bdev/raid/raid1.o
00:02:14.220 CC module/bdev/delay/vbdev_delay.o
00:02:14.220 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:14.220 CC module/bdev/raid/concat.o
00:02:14.220 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:14.220 CC module/bdev/gpt/gpt.o
00:02:14.220 CC module/bdev/gpt/vbdev_gpt.o
00:02:14.220 CC module/bdev/null/bdev_null_rpc.o
00:02:14.220 CC module/bdev/null/bdev_null.o
00:02:14.220 CC module/bdev/error/vbdev_error_rpc.o
00:02:14.220 CC module/bdev/iscsi/bdev_iscsi.o
00:02:14.220 CC module/bdev/error/vbdev_error.o
00:02:14.220 CC module/bdev/aio/bdev_aio.o
00:02:14.220 CC module/bdev/aio/bdev_aio_rpc.o
00:02:14.220 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:14.220 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:14.220 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:14.220 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:14.220 CC module/bdev/nvme/bdev_nvme.o
00:02:14.220 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:14.220 CC module/bdev/nvme/nvme_rpc.o
00:02:14.220 CC module/bdev/nvme/bdev_mdns_client.o
00:02:14.220 CC module/bdev/nvme/vbdev_opal.o
00:02:14.220 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:14.220 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:14.479 LIB libspdk_blobfs_bdev.a
00:02:14.479 LIB libspdk_bdev_ftl.a
00:02:14.479 LIB libspdk_bdev_null.a
00:02:14.479 LIB libspdk_bdev_passthru.a
00:02:14.479 LIB libspdk_bdev_error.a
00:02:14.479 LIB libspdk_bdev_zone_block.a
00:02:14.479 LIB libspdk_bdev_split.a
00:02:14.479 LIB libspdk_bdev_delay.a
00:02:14.738 LIB libspdk_bdev_gpt.a
00:02:14.738 LIB libspdk_bdev_lvol.a
00:02:14.738 LIB libspdk_bdev_iscsi.a
00:02:14.738 LIB libspdk_bdev_malloc.a
00:02:14.738 LIB libspdk_bdev_aio.a
00:02:14.738 LIB libspdk_bdev_virtio.a
00:02:15.002 LIB libspdk_bdev_raid.a
00:02:15.941 LIB libspdk_bdev_nvme.a
00:02:16.510 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:16.510 CC module/event/subsystems/iobuf/iobuf.o
00:02:16.510 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:16.510 CC module/event/subsystems/scheduler/scheduler.o
00:02:16.510 CC module/event/subsystems/keyring/keyring.o
00:02:16.510 CC module/event/subsystems/sock/sock.o
00:02:16.510 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:16.510 CC module/event/subsystems/vmd/vmd.o
00:02:16.510 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:16.510 LIB libspdk_event_vhost_blk.a
00:02:16.510 LIB libspdk_event_keyring.a
00:02:16.510 LIB libspdk_event_sock.a
00:02:16.510 LIB libspdk_event_scheduler.a
00:02:16.510 LIB libspdk_event_vmd.a
00:02:16.510 LIB libspdk_event_vfu_tgt.a
00:02:16.510 LIB libspdk_event_iobuf.a
00:02:16.770 CC module/event/subsystems/accel/accel.o
00:02:16.770 LIB libspdk_event_accel.a
00:02:17.338 CC module/event/subsystems/bdev/bdev.o
00:02:17.338 LIB libspdk_event_bdev.a
00:02:17.624 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:17.624 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:17.624 CC module/event/subsystems/scsi/scsi.o
00:02:17.624 CC module/event/subsystems/nbd/nbd.o
00:02:17.624 CC module/event/subsystems/ublk/ublk.o
00:02:17.967 LIB libspdk_event_scsi.a
00:02:17.967 LIB libspdk_event_nbd.a
00:02:17.967 LIB libspdk_event_ublk.a
00:02:17.967 LIB libspdk_event_nvmf.a
00:02:18.225 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:18.225 CC module/event/subsystems/iscsi/iscsi.o
00:02:18.225 LIB libspdk_event_vhost_scsi.a
00:02:18.225 LIB libspdk_event_iscsi.a
00:02:18.491 CXX app/trace/trace.o
00:02:18.491 CC app/trace_record/trace_record.o
00:02:18.491 CC app/spdk_nvme_discover/discovery_aer.o
00:02:18.491 CC app/spdk_nvme_perf/perf.o
00:02:18.491 CC app/spdk_lspci/spdk_lspci.o
00:02:18.491 CC app/spdk_nvme_identify/identify.o
00:02:18.491 CC app/spdk_top/spdk_top.o
00:02:18.491 TEST_HEADER include/spdk/accel.h
00:02:18.491 TEST_HEADER include/spdk/accel_module.h
00:02:18.491 TEST_HEADER include/spdk/assert.h
00:02:18.491 TEST_HEADER include/spdk/barrier.h
00:02:18.491 TEST_HEADER include/spdk/bdev.h
00:02:18.491 TEST_HEADER include/spdk/base64.h
00:02:18.491 TEST_HEADER include/spdk/bdev_zone.h
00:02:18.491 TEST_HEADER include/spdk/bdev_module.h
00:02:18.491 TEST_HEADER include/spdk/bit_array.h
00:02:18.491 TEST_HEADER include/spdk/bit_pool.h
00:02:18.491 TEST_HEADER include/spdk/blob_bdev.h
00:02:18.491 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:18.491 TEST_HEADER include/spdk/blobfs.h
00:02:18.491 TEST_HEADER include/spdk/blob.h
00:02:18.491 CC test/rpc_client/rpc_client_test.o
00:02:18.491 TEST_HEADER include/spdk/conf.h
00:02:18.491 TEST_HEADER include/spdk/config.h
00:02:18.491 TEST_HEADER include/spdk/cpuset.h
00:02:18.491 TEST_HEADER include/spdk/crc16.h
00:02:18.491 TEST_HEADER include/spdk/crc32.h
00:02:18.491 TEST_HEADER include/spdk/crc64.h
00:02:18.491 TEST_HEADER include/spdk/dif.h
00:02:18.491 TEST_HEADER include/spdk/dma.h
00:02:18.491 TEST_HEADER include/spdk/endian.h
00:02:18.491 TEST_HEADER include/spdk/env_dpdk.h
00:02:18.491 TEST_HEADER include/spdk/env.h
00:02:18.491 TEST_HEADER include/spdk/event.h
00:02:18.491 TEST_HEADER include/spdk/fd_group.h
00:02:18.491 TEST_HEADER include/spdk/fd.h
00:02:18.491 TEST_HEADER include/spdk/file.h
00:02:18.491 TEST_HEADER include/spdk/ftl.h
00:02:18.491 TEST_HEADER include/spdk/gpt_spec.h
00:02:18.491 CC app/iscsi_tgt/iscsi_tgt.o
00:02:18.491 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:18.491 TEST_HEADER include/spdk/hexlify.h
00:02:18.491 TEST_HEADER include/spdk/histogram_data.h
00:02:18.756 TEST_HEADER include/spdk/idxd.h
00:02:18.756 CC app/spdk_dd/spdk_dd.o
00:02:18.756 TEST_HEADER include/spdk/idxd_spec.h
00:02:18.756 TEST_HEADER include/spdk/init.h
00:02:18.756 TEST_HEADER include/spdk/ioat.h
00:02:18.756 CC app/nvmf_tgt/nvmf_main.o
00:02:18.756 TEST_HEADER include/spdk/ioat_spec.h
00:02:18.756 CC app/vhost/vhost.o
00:02:18.756 TEST_HEADER include/spdk/iscsi_spec.h
00:02:18.756 TEST_HEADER include/spdk/json.h
00:02:18.756 CC test/app/jsoncat/jsoncat.o
00:02:18.756 CC test/app/histogram_perf/histogram_perf.o
00:02:18.756 TEST_HEADER include/spdk/jsonrpc.h
00:02:18.756 TEST_HEADER include/spdk/keyring.h
00:02:18.756 TEST_HEADER include/spdk/keyring_module.h
00:02:18.756 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:18.756 TEST_HEADER include/spdk/likely.h
00:02:18.756 TEST_HEADER include/spdk/log.h
00:02:18.756 CC test/event/reactor/reactor.o
00:02:18.756 CC test/app/stub/stub.o
00:02:18.756 TEST_HEADER include/spdk/lvol.h
00:02:18.756 CC test/nvme/sgl/sgl.o
00:02:18.756 CC test/nvme/e2edp/nvme_dp.o
00:02:18.756 TEST_HEADER include/spdk/memory.h
00:02:18.756 CC test/nvme/aer/aer.o
00:02:18.756 CC examples/vmd/lsvmd/lsvmd.o
00:02:18.756 TEST_HEADER include/spdk/mmio.h
00:02:18.756 CC test/env/pci/pci_ut.o
00:02:18.756 CC examples/ioat/perf/perf.o
00:02:18.756 CC test/nvme/reserve/reserve.o
00:02:18.756 TEST_HEADER include/spdk/nbd.h
00:02:18.756 CC app/spdk_tgt/spdk_tgt.o
00:02:18.756 CC test/event/reactor_perf/reactor_perf.o
00:02:18.756 CC test/thread/poller_perf/poller_perf.o
00:02:18.756 CC test/nvme/connect_stress/connect_stress.o
00:02:18.756 CC test/env/vtophys/vtophys.o
00:02:18.756 TEST_HEADER include/spdk/notify.h
00:02:18.756 CC test/thread/lock/spdk_lock.o
00:02:18.756 CC test/nvme/overhead/overhead.o
00:02:18.756 CC test/event/event_perf/event_perf.o
00:02:18.756 TEST_HEADER include/spdk/nvme.h
00:02:18.756 CC test/nvme/reset/reset.o
00:02:18.756 CC test/nvme/boot_partition/boot_partition.o
00:02:18.756 CC examples/accel/perf/accel_perf.o
00:02:18.756 CC test/nvme/startup/startup.o
00:02:18.756 TEST_HEADER include/spdk/nvme_intel.h
00:02:18.756 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:18.756 CC test/nvme/fused_ordering/fused_ordering.o
00:02:18.756 CC examples/sock/hello_world/hello_sock.o
00:02:18.756 CC test/env/memory/memory_ut.o
00:02:18.756 CC examples/vmd/led/led.o
00:02:18.756 CC test/nvme/compliance/nvme_compliance.o
00:02:18.756 CC examples/ioat/verify/verify.o
00:02:18.756 CC test/nvme/simple_copy/simple_copy.o
00:02:18.756 CC app/fio/nvme/fio_plugin.o
00:02:18.756 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:18.756 CC examples/nvme/reconnect/reconnect.o
00:02:18.756 CC test/nvme/err_injection/err_injection.o
00:02:18.756 CC test/nvme/cuse/cuse.o
00:02:18.756 CC examples/idxd/perf/perf.o
00:02:18.756 CC examples/nvme/hello_world/hello_world.o
00:02:18.756 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:18.756 CC examples/util/zipf/zipf.o
00:02:18.756 CC test/event/app_repeat/app_repeat.o
00:02:18.756 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:18.756 TEST_HEADER include/spdk/nvme_spec.h
00:02:18.756 CC test/nvme/fdp/fdp.o
00:02:18.756 TEST_HEADER include/spdk/nvme_zns.h
00:02:18.756 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:18.756 CC test/accel/dif/dif.o
00:02:18.756 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:18.756 TEST_HEADER include/spdk/nvmf.h
00:02:18.756 CC test/dma/test_dma/test_dma.o
00:02:18.756 CC test/app/bdev_svc/bdev_svc.o
00:02:18.756 TEST_HEADER include/spdk/nvmf_spec.h
00:02:18.756 TEST_HEADER include/spdk/nvmf_transport.h
00:02:18.756 LINK spdk_lspci
00:02:18.756 TEST_HEADER include/spdk/opal.h
00:02:18.756 CC test/blobfs/mkfs/mkfs.o
00:02:18.756 TEST_HEADER include/spdk/opal_spec.h
00:02:18.756 TEST_HEADER include/spdk/pci_ids.h
00:02:18.756 TEST_HEADER include/spdk/pipe.h
00:02:18.756 CC test/event/scheduler/scheduler.o
00:02:18.756 CC examples/thread/thread/thread_ex.o
00:02:18.756 CC examples/blob/cli/blobcli.o
00:02:18.756 TEST_HEADER include/spdk/queue.h
00:02:18.756 CC test/bdev/bdevio/bdevio.o
00:02:18.756 TEST_HEADER include/spdk/reduce.h
00:02:18.756 CC app/fio/bdev/fio_plugin.o
00:02:18.756 TEST_HEADER include/spdk/rpc.h
00:02:18.756 CC examples/blob/hello_world/hello_blob.o
00:02:18.756 CC examples/nvmf/nvmf/nvmf.o
00:02:18.756 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:18.756 CC examples/bdev/hello_world/hello_bdev.o
00:02:18.756 TEST_HEADER include/spdk/scheduler.h
00:02:18.756 TEST_HEADER include/spdk/scsi.h
00:02:18.756 CC examples/bdev/bdevperf/bdevperf.o
00:02:18.756 CC test/env/mem_callbacks/mem_callbacks.o
00:02:18.756 TEST_HEADER include/spdk/scsi_spec.h
00:02:18.756 TEST_HEADER include/spdk/sock.h
00:02:18.756 TEST_HEADER include/spdk/stdinc.h
00:02:18.756 TEST_HEADER include/spdk/string.h
00:02:18.756 CC test/lvol/esnap/esnap.o
00:02:18.756 TEST_HEADER include/spdk/thread.h
00:02:18.756 TEST_HEADER include/spdk/trace.h
00:02:18.756 TEST_HEADER include/spdk/trace_parser.h
00:02:18.756 TEST_HEADER include/spdk/tree.h
00:02:18.756 TEST_HEADER include/spdk/ublk.h
00:02:18.756 LINK rpc_client_test
00:02:18.756 TEST_HEADER include/spdk/util.h
00:02:18.756 TEST_HEADER include/spdk/uuid.h
00:02:18.756 TEST_HEADER include/spdk/version.h
00:02:18.756 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:18.756 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:18.756 TEST_HEADER include/spdk/vhost.h
00:02:18.756 LINK spdk_nvme_discover
00:02:18.756 TEST_HEADER include/spdk/vmd.h
00:02:18.756 TEST_HEADER include/spdk/xor.h
00:02:18.756 LINK jsoncat
00:02:18.756 TEST_HEADER include/spdk/zipf.h
00:02:18.756 CXX test/cpp_headers/accel.o
00:02:18.756 LINK histogram_perf
00:02:19.016 LINK reactor
00:02:19.016 LINK lsvmd
00:02:19.016 LINK interrupt_tgt
00:02:19.016 LINK spdk_trace_record
00:02:19.016 LINK env_dpdk_post_init
00:02:19.016 LINK reactor_perf
00:02:19.016 LINK iscsi_tgt
00:02:19.016 LINK event_perf
00:02:19.016 LINK poller_perf
00:02:19.016 LINK vhost
00:02:19.016 LINK vtophys
00:02:19.016 LINK led
00:02:19.016 LINK nvmf_tgt
00:02:19.016 LINK zipf
00:02:19.016 LINK app_repeat
00:02:19.016 LINK stub
00:02:19.016 LINK startup
00:02:19.016 LINK connect_stress
00:02:19.016 LINK boot_partition
00:02:19.016 LINK doorbell_aers
00:02:19.016 LINK err_injection
00:02:19.016 LINK reserve
00:02:19.016 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end]
00:02:19.016 struct spdk_nvme_fdp_ruhs ruhs;
00:02:19.016 ^
00:02:19.016 LINK fused_ordering
00:02:19.016 LINK bdev_svc
00:02:19.016 LINK ioat_perf
00:02:19.016 LINK verify
00:02:19.016 LINK spdk_tgt
00:02:19.016 LINK hello_world
00:02:19.016 LINK mkfs
00:02:19.016 LINK hello_sock
00:02:19.016 LINK simple_copy
00:02:19.016 LINK mem_callbacks
00:02:19.016 LINK nvme_dp
00:02:19.016 LINK sgl
00:02:19.016 LINK scheduler
00:02:19.016 LINK spdk_trace
00:02:19.016 LINK overhead
00:02:19.016 LINK reset
00:02:19.016 LINK fdp
00:02:19.016 LINK thread
00:02:19.016 LINK hello_blob
00:02:19.016 LINK aer
00:02:19.016 CXX test/cpp_headers/accel_module.o
00:02:19.016 LINK hello_bdev
00:02:19.280 LINK nvmf
00:02:19.281 LINK idxd_perf
00:02:19.281 LINK reconnect
00:02:19.281 LINK test_dma
00:02:19.281 LINK bdevio
00:02:19.281 LINK spdk_dd
00:02:19.281 LINK pci_ut
00:02:19.281 LINK nvme_manage
00:02:19.281 LINK accel_perf
00:02:19.281 LINK nvme_compliance
00:02:19.281 CXX test/cpp_headers/assert.o
00:02:19.281 LINK dif
00:02:19.541 LINK nvme_fuzz
00:02:19.542 1 warning generated.
00:02:19.542 LINK blobcli
00:02:19.542 LINK spdk_nvme_identify
00:02:19.542 LINK spdk_bdev
00:02:19.542 LINK spdk_nvme
00:02:19.542 CXX test/cpp_headers/barrier.o
00:02:19.803 LINK spdk_nvme_perf
00:02:19.803 CXX test/cpp_headers/base64.o
00:02:19.803 CC examples/nvme/arbitration/arbitration.o
00:02:19.803 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:19.803 CXX test/cpp_headers/bdev.o
00:02:19.803 CXX test/cpp_headers/bdev_module.o
00:02:19.803 CC examples/nvme/hotplug/hotplug.o
00:02:19.803 LINK bdevperf
00:02:19.803 LINK memory_ut
00:02:19.803 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:19.803 LINK spdk_top
00:02:19.803 CXX test/cpp_headers/bdev_zone.o
00:02:19.803 CXX test/cpp_headers/bit_array.o
00:02:19.803 CC examples/nvme/abort/abort.o
00:02:19.803 CXX test/cpp_headers/bit_pool.o
00:02:19.803 CXX test/cpp_headers/blob_bdev.o
00:02:19.803 CXX test/cpp_headers/blobfs_bdev.o
00:02:20.065 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o
00:02:20.065 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:20.065 CXX test/cpp_headers/blobfs.o
00:02:20.065 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:20.065 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o
00:02:20.065 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:20.065 CXX test/cpp_headers/blob.o
00:02:20.065 CXX test/cpp_headers/conf.o
00:02:20.065 LINK hotplug
00:02:20.065 CXX test/cpp_headers/config.o
00:02:20.065 CXX test/cpp_headers/cpuset.o
00:02:20.065 LINK cmb_copy
00:02:20.065 CXX test/cpp_headers/crc16.o
00:02:20.065 CXX test/cpp_headers/crc32.o
00:02:20.065 CXX test/cpp_headers/crc64.o
00:02:20.065 CXX test/cpp_headers/dif.o
00:02:20.065 CXX test/cpp_headers/dma.o
00:02:20.326 CXX test/cpp_headers/endian.o
00:02:20.326 CXX test/cpp_headers/env_dpdk.o
00:02:20.326 CXX test/cpp_headers/env.o
00:02:20.326 LINK arbitration
00:02:20.326 LINK pmr_persistence
00:02:20.326 CXX test/cpp_headers/event.o
00:02:20.326 CXX test/cpp_headers/fd_group.o
00:02:20.326 CXX test/cpp_headers/fd.o
00:02:20.326 CXX test/cpp_headers/file.o
00:02:20.326 CXX test/cpp_headers/ftl.o
00:02:20.326 CXX test/cpp_headers/gpt_spec.o
00:02:20.326 CXX test/cpp_headers/hexlify.o
00:02:20.326 CXX test/cpp_headers/histogram_data.o
00:02:20.326 CXX test/cpp_headers/idxd.o
00:02:20.326 LINK llvm_vfio_fuzz
00:02:20.326 CXX test/cpp_headers/idxd_spec.o
00:02:20.326 CXX test/cpp_headers/init.o
00:02:20.326 LINK abort
00:02:20.326 CXX test/cpp_headers/ioat.o
00:02:20.326 CXX test/cpp_headers/ioat_spec.o
00:02:20.326 CXX test/cpp_headers/iscsi_spec.o
00:02:20.326 CXX test/cpp_headers/json.o
00:02:20.326 CXX test/cpp_headers/jsonrpc.o
00:02:20.326 CXX test/cpp_headers/keyring.o
00:02:20.326 CXX test/cpp_headers/keyring_module.o
00:02:20.326 CXX test/cpp_headers/likely.o
00:02:20.595 CXX test/cpp_headers/log.o
00:02:20.595 CXX test/cpp_headers/lvol.o
00:02:20.595 CXX test/cpp_headers/memory.o
00:02:20.595 LINK cuse
00:02:20.595 CXX test/cpp_headers/mmio.o
00:02:20.595 CXX test/cpp_headers/nbd.o
00:02:20.595 CXX test/cpp_headers/notify.o
00:02:20.595 CXX test/cpp_headers/nvme.o
00:02:20.595 CXX test/cpp_headers/nvme_intel.o
00:02:20.595 CXX test/cpp_headers/nvme_ocssd.o
00:02:20.595 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:20.595 CXX test/cpp_headers/nvme_spec.o
00:02:20.595 CXX test/cpp_headers/nvme_zns.o
00:02:20.595 CXX test/cpp_headers/nvmf_cmd.o
00:02:20.595 CXX test/cpp_headers/nvmf.o
00:02:20.595 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:20.595 CXX test/cpp_headers/nvmf_spec.o
00:02:20.595 LINK vhost_fuzz
00:02:20.595 CXX test/cpp_headers/nvmf_transport.o
00:02:20.595 CXX test/cpp_headers/opal.o
00:02:20.595 CXX test/cpp_headers/opal_spec.o
00:02:20.595 CXX test/cpp_headers/pci_ids.o
00:02:20.595 CXX test/cpp_headers/queue.o
00:02:20.595 CXX test/cpp_headers/pipe.o
00:02:20.595 CXX test/cpp_headers/reduce.o
00:02:20.595 CXX test/cpp_headers/rpc.o
00:02:20.595 CXX test/cpp_headers/scheduler.o
00:02:20.595 CXX test/cpp_headers/scsi.o
00:02:20.595 CXX test/cpp_headers/scsi_spec.o
00:02:20.595 CXX test/cpp_headers/sock.o
00:02:20.595 CXX test/cpp_headers/stdinc.o
00:02:20.595 CXX test/cpp_headers/string.o
00:02:20.595 CXX test/cpp_headers/thread.o
00:02:20.595 CXX test/cpp_headers/trace.o
00:02:20.595 CXX test/cpp_headers/trace_parser.o
00:02:20.595 CXX test/cpp_headers/tree.o
00:02:20.595 CXX test/cpp_headers/ublk.o
00:02:20.595 CXX test/cpp_headers/util.o
00:02:20.595 CXX test/cpp_headers/uuid.o
00:02:20.595 CXX test/cpp_headers/version.o
00:02:20.595 CXX test/cpp_headers/vfio_user_pci.o
00:02:20.595 CXX test/cpp_headers/vhost.o
00:02:20.595 CXX test/cpp_headers/vfio_user_spec.o
00:02:20.595 CXX test/cpp_headers/vmd.o
00:02:20.595 CXX test/cpp_headers/xor.o
00:02:20.854 CXX test/cpp_headers/zipf.o
00:02:20.854 LINK spdk_lock
00:02:20.854 LINK llvm_nvme_fuzz
00:02:21.420 LINK iscsi_fuzz
00:02:23.322 LINK esnap
00:02:23.888
00:02:23.888 real 0m26.598s
00:02:23.888 user 5m18.039s
00:02:23.888 sys 1m49.476s
00:02:23.888 10:23:12 make -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:02:23.888 10:23:12 make -- common/autotest_common.sh@10 -- $ set +x
00:02:23.888 ************************************
00:02:23.888 END TEST make
00:02:23.888 ************************************
00:02:23.888 10:23:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:23.888 10:23:12 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:23.888 10:23:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:23.888 10:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:23.888 10:23:12 -- pm/common@44 -- $ pid=3298102
00:02:23.888 10:23:12 -- pm/common@50 -- $ kill -TERM 3298102
00:02:23.888 10:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:23.888 10:23:12 -- pm/common@44 -- $ pid=3298104
00:02:23.888 10:23:12 -- pm/common@50 -- $ kill -TERM 3298104
00:02:23.888 10:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:23.888 10:23:12 -- pm/common@44 -- $ pid=3298105
00:02:23.888 10:23:12 -- pm/common@50 -- $ kill -TERM 3298105
00:02:23.888 10:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:23.888 10:23:12 -- pm/common@44 -- $ pid=3298129
00:02:23.888 10:23:12 -- pm/common@50 -- $ sudo -E kill -TERM 3298129
00:02:23.888 10:23:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh
00:02:23.888 10:23:12 -- nvmf/common.sh@7 -- # uname -s
00:02:23.888 10:23:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:23.888 10:23:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:23.888 10:23:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:23.888 10:23:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:23.888 10:23:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:23.888 10:23:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:23.888 10:23:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:23.888 10:23:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:23.888 10:23:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:23.888 10:23:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:23.888 10:23:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:02:23.888 10:23:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:02:23.888 10:23:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:23.888 10:23:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:23.888 10:23:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:02:23.888 10:23:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:23.888 10:23:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:02:23.888 10:23:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:23.888 10:23:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:23.888 10:23:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:23.888 10:23:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.888 10:23:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.888 10:23:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.888 10:23:12 -- paths/export.sh@5 -- # export PATH
00:02:23.888 10:23:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.888 10:23:12 -- nvmf/common.sh@47 -- # : 0
00:02:23.888 10:23:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:23.888 10:23:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:23.888 10:23:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:23.888 10:23:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:23.888 10:23:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:23.888 10:23:12 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:23.888 10:23:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:23.888 10:23:12 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:23.888 10:23:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:23.888 10:23:12 -- spdk/autotest.sh@32 -- # uname -s
00:02:23.888 10:23:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:23.888 10:23:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:23.888 10:23:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps
00:02:23.888 10:23:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:23.888 10:23:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps
00:02:23.888 10:23:12 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:23.888 10:23:12 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:23.888 10:23:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:23.888 10:23:12 -- spdk/autotest.sh@48 -- # udevadm_pid=3369261
00:02:23.888 10:23:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:23.888 10:23:12 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:23.888 10:23:12 -- pm/common@17 -- # local monitor
00:02:23.888 10:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.888 10:23:12 -- pm/common@21 -- # date +%s
00:02:23.888 10:23:12 -- pm/common@21 -- # date +%s
00:02:23.888 10:23:12 -- pm/common@21 -- # date +%s
00:02:23.888 10:23:12 -- pm/common@25 -- # sleep 1
00:02:23.888 10:23:12 -- pm/common@21 -- # date +%s
00:02:23.888 10:23:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721722992
00:02:23.888 10:23:12 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721722992
00:02:23.888 10:23:12 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721722992
00:02:23.888 10:23:12 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721722992
00:02:23.888 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721722992_collect-vmstat.pm.log
00:02:23.888 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721722992_collect-cpu-load.pm.log
00:02:23.888 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721722992_collect-cpu-temp.pm.log
00:02:23.888 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721722992_collect-bmc-pm.bmc.pm.log
00:02:24.825 10:23:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:25.084 10:23:13 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:25.084 10:23:13 -- common/autotest_common.sh@720 -- # xtrace_disable
00:02:25.084 10:23:13 -- common/autotest_common.sh@10 -- # set +x
00:02:25.084 10:23:13 -- spdk/autotest.sh@59 -- # create_test_list
00:02:25.084 10:23:13 -- common/autotest_common.sh@744 -- # xtrace_disable
00:02:25.084 10:23:13 -- common/autotest_common.sh@10 -- # set +x
00:02:25.084 10:23:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh
00:02:25.084 10:23:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:25.084 10:23:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:25.084 10:23:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:02:25.084 10:23:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:25.084 10:23:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:25.084 10:23:13 -- common/autotest_common.sh@1451 -- # uname
00:02:25.084 10:23:13 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']'
00:02:25.084 10:23:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:25.084 10:23:13 -- common/autotest_common.sh@1471 -- # uname
00:02:25.084 10:23:13 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]]
00:02:25.084 10:23:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:25.084 10:23:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang
00:02:25.084 10:23:13 -- spdk/autotest.sh@72 -- # hash lcov
00:02:25.084 10:23:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:02:25.084 10:23:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:25.084 10:23:13 -- common/autotest_common.sh@720 -- # xtrace_disable
00:02:25.084 10:23:13 -- common/autotest_common.sh@10 -- # set +x
00:02:25.084 10:23:13 -- spdk/autotest.sh@91 -- # rm -f
00:02:25.084 10:23:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:28.373 0000:5e:00.0 (144d a80a): Already using the nvme driver
00:02:28.373 0000:af:00.0 (8086 2701): Already using the nvme driver
00:02:28.373 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:28.373 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:28.373 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:28.373 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:28.373 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:28.373 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:b0:00.0 (8086 2701): Already using the nvme driver
00:02:28.632 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:28.632 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:28.891 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:28.891 10:23:17 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:28.891 10:23:17 -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:02:28.891 10:23:17 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:02:28.891 10:23:17 -- common/autotest_common.sh@1666 -- # local nvme bdf
00:02:28.891 10:23:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:28.891 10:23:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:02:28.891 10:23:17 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:02:28.891 10:23:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:28.891 10:23:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:28.891 10:23:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:28.891 10:23:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1
00:02:28.891 10:23:17 -- common/autotest_common.sh@1658 -- # local device=nvme1n1
00:02:28.891 10:23:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:02:28.891 10:23:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:28.891 10:23:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:28.891 10:23:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1
00:02:28.891 10:23:17 -- common/autotest_common.sh@1658 -- # local device=nvme2n1
00:02:28.891 10:23:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:02:28.891 10:23:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:28.891 10:23:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:28.891 10:23:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:28.891 10:23:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:28.891 10:23:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:28.891 10:23:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:28.891 10:23:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:28.891 No valid GPT data, bailing
00:02:28.891 10:23:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:28.891 10:23:17 -- scripts/common.sh@391 -- # pt=
00:02:28.891 10:23:17 -- scripts/common.sh@392 -- # return 1
00:02:28.891 10:23:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:28.891 1+0 records in
00:02:28.891 1+0 records out
00:02:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.001838 s, 570 MB/s
00:02:28.891 10:23:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:28.891 10:23:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:28.891 10:23:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1
00:02:28.891 10:23:17 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt
00:02:28.891 10:23:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:02:28.891 No valid GPT data, bailing
00:02:28.891 10:23:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:02:28.891 10:23:17 -- scripts/common.sh@391 -- # pt=
00:02:28.891 10:23:17 -- scripts/common.sh@392 -- # return 1
00:02:28.891 10:23:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:02:28.891 1+0 records in
00:02:28.891 1+0 records out
00:02:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440999 s, 238 MB/s
00:02:28.891 10:23:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:28.891 10:23:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:28.891 10:23:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1
00:02:28.891 10:23:17 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt
00:02:28.891 10:23:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:02:28.891 No valid GPT data, bailing
00:02:28.891 10:23:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:02:28.891 10:23:17 -- scripts/common.sh@391 -- # pt=
00:02:28.891 10:23:17 -- scripts/common.sh@392 -- # return 1
00:02:28.891 10:23:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:02:28.891 1+0 records in
00:02:28.891 1+0 records out
00:02:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427144 s, 245 MB/s
00:02:28.891 10:23:17 -- spdk/autotest.sh@118 -- # sync
00:02:28.891 10:23:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:28.891 10:23:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:28.891 10:23:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:34.166 10:23:22 -- spdk/autotest.sh@124 -- # uname -s
00:02:34.166 10:23:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:34.166 10:23:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:34.166 10:23:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:34.166 10:23:22 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:34.166 10:23:22 -- common/autotest_common.sh@10 -- # set +x
00:02:34.166 ************************************
00:02:34.166 START TEST setup.sh
00:02:34.166 ************************************
00:02:34.166 10:23:22 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:34.166 * Looking for test storage...
00:02:34.166 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:34.166 10:23:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:34.166 10:23:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:34.166 10:23:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh
00:02:34.166 10:23:22 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:34.166 10:23:22 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:34.166 10:23:22 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:34.166 ************************************
00:02:34.166 START TEST acl
00:02:34.166 ************************************
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh
00:02:34.426 * Looking for test storage...
00:02:34.426 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:34.426 10:23:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:02:34.426 10:23:22 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:34.426 10:23:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:02:34.426 10:23:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:02:34.426 10:23:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:02:34.426 10:23:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:02:34.426 10:23:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:02:34.426 10:23:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:34.426 10:23:22 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:38.625 10:23:26 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:02:38.625 10:23:26 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:02:38.625 10:23:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.625 10:23:26 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:02:38.625 10:23:26 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:02:38.625 10:23:26 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:02:41.916 Hugepages
00:02:41.916 node hugesize free / total
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916
00:02:41.916 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:41.916 10:23:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:af:00.0 == *:*:*.* ]]
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]]
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:b0:00.0 == *:*:*.* ]]
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]]
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 3 > 0 ))
00:02:41.916 10:23:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:02:41.916 10:23:30 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:41.916 10:23:30 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:41.916 10:23:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:41.916 ************************************
00:02:41.916 START TEST denied
00:02:41.916 ************************************
00:02:41.916 10:23:30 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:02:41.916 10:23:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:02:41.916 10:23:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:02:41.916 10:23:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:02:41.916 10:23:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:02:41.916 10:23:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:02:46.108 0000:5e:00.0 (144d a80a): Skipping denied controller at 0000:5e:00.0
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:46.108 10:23:33 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:50.302
00:02:50.302 real 0m7.892s
00:02:50.302 user 0m2.358s
00:02:50.302 sys 0m4.643s
00:02:50.302 10:23:38 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:50.302 10:23:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:02:50.302 ************************************
00:02:50.302 END TEST denied
00:02:50.302 ************************************
00:02:50.302 10:23:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:50.302 10:23:38 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:50.302 10:23:38 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:50.302 10:23:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:50.302 ************************************
00:02:50.302 START TEST allowed
00:02:50.302 ************************************
00:02:50.302 10:23:38 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:02:50.302 10:23:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:02:50.302 10:23:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:50.302 10:23:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:50.302 10:23:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.302 10:23:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:55.579 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:af:00.0 0000:b0:00.0 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:af:00.0 ]] 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/driver 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:b0:00.0 ]] 00:02:55.579 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:b0:00.0/driver 00:02:55.580 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:55.580 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:55.580 10:23:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:55.580 10:23:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.580 10:23:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.771 00:02:59.771 real 0m9.329s 00:02:59.771 user 0m2.592s 00:02:59.771 sys 0m5.133s 00:02:59.771 10:23:47 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:59.771 10:23:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:59.771 ************************************ 00:02:59.771 END TEST allowed 00:02:59.771 ************************************ 00:02:59.771 00:02:59.771 real 0m24.932s 00:02:59.771 user 0m7.632s 00:02:59.771 sys 0m14.982s 00:02:59.771 10:23:47 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:59.771 10:23:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.771 ************************************ 00:02:59.771 END TEST acl 00:02:59.771 ************************************ 00:02:59.771 10:23:47 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:59.771 10:23:47 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:59.771 10:23:47 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:59.771 10:23:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.771 ************************************ 00:02:59.771 START TEST hugepages 00:02:59.771 ************************************ 00:02:59.771 10:23:47 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:59.771 * Looking for test storage... 
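The allowed test is the mirror image: with PCI_ALLOWED=0000:5e:00.0 only that controller is rebound (nvme -> vfio-pci above), and the verify step walks the two remaining NVMe controllers to confirm they were left alone. A sketch of the same check, assuming a driver-symlink basename comparison suffices, as in the trace:

  PCI_ALLOWED='0000:5e:00.0' ./spdk/scripts/setup.sh config
  for dev in 0000:af:00.0 0000:b0:00.0; do      # controllers outside the allow list
    driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
    [[ ${driver##*/} == nvme ]]                 # must still be on the kernel driver
  done
  ./spdk/scripts/setup.sh reset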
00:02:59.771 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 36166164 kB' 'MemAvailable: 40130996 kB' 'Buffers: 2704 kB' 'Cached: 16673392 kB' 'SwapCached: 0 kB' 'Active: 13590852 kB' 'Inactive: 3640756 kB' 'Active(anon): 13082424 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559332 kB' 'Mapped: 162316 kB' 'Shmem: 12526912 kB' 'KReclaimable: 463360 kB' 'Slab: 919260 kB' 'SReclaimable: 463360 kB' 'SUnreclaim: 455900 kB' 'KernelStack: 16240 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439176 kB' 'Committed_AS: 14452392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203348 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.771 10:23:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.771 10:23:47 
[xtrace elided: get_meminfo repeats the same IFS=': ' read / pattern-match / continue step for every remaining /proc/meminfo field until it reaches Hugepagesize]
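The elided iterations are setup/common.sh's get_meminfo helper: it snapshots meminfo into an array and walks it with IFS=': ' until the requested key matches, echoing the value (here Hugepagesize -> 2048). A stripped-down equivalent, assuming the plain /proc/meminfo case and ignoring the per-node handling the real helper also has:

  get_meminfo() {                      # sketch only: single-file lookup
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }
  get_meminfo Hugepagesize             # prints 2048 on this host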
setup/common.sh@31 -- # read -r var val _ 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:59.773 10:23:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:59.773 10:23:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:59.773 10:23:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:59.773 10:23:47 
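clear_hp, traced just above, zeroes every hugepage pool on every NUMA node before the per-test allocations, then exports CLEAR_HUGE=yes. In sketch form (two nodes with two pool sizes each on this machine; root is required, and the glob is an assumption about the sysfs layout):

  for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo 0 > "$hp"/nr_hugepages      # drop the pool (e.g. 2048kB and 1048576kB)
    done
  done
  export CLEAR_HUGE=yes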
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:59.773 ************************************ 00:02:59.773 START TEST default_setup 00:02:59.773 ************************************ 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.773 10:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:03.071 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:03:03.071 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:03.071 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 
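get_test_nr_hugepages, traced at the start of default_setup, turns a size in kB into a page count: 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, all assigned to node 0. The arithmetic as a sketch (the per-node sysfs write is an assumption; this excerpt only shows the bookkeeping, not where the count is written):

  size_kb=2097152 hugepagesize_kb=2048
  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # = 1024
  echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages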
0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:03.071 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:03:03.071 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.418 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38393816 kB' 'MemAvailable: 42358224 kB' 'Buffers: 2704 kB' 'Cached: 16673492 kB' 'SwapCached: 0 kB' 'Active: 13611668 kB' 'Inactive: 3640756 kB' 'Active(anon): 13103240 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578916 kB' 'Mapped: 163372 kB' 'Shmem: 12527012 kB' 'KReclaimable: 462936 kB' 'Slab: 917296 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454360 kB' 'KernelStack: 16672 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14478460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203364 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:03.418 [xtrace elided: the same per-field meminfo scan repeats for the AnonHugePages lookup]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.419 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38386540 kB' 'MemAvailable: 42350948 kB' 'Buffers: 2704 kB' 'Cached: 16673492 kB' 'SwapCached: 0 kB' 'Active: 13614392 kB' 'Inactive: 3640756 kB' 'Active(anon): 13105964 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581672 kB' 'Mapped: 163332 kB' 'Shmem: 12527012 kB' 'KReclaimable: 462936 kB' 'Slab: 917296 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454360 kB' 'KernelStack: 16624 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14480732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203384 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.420 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
[xtrace elided: the same per-field meminfo scan repeats for the HugePages_Surp lookup]
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.421 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38385464 kB' 'MemAvailable: 42349872 kB' 'Buffers: 2704 kB' 'Cached: 16673516 kB' 'SwapCached: 0 kB' 'Active: 13607612 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099184 kB' 'Inactive(anon): 0 kB' 'Active(file): 
508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575316 kB' 'Mapped: 163248 kB' 'Shmem: 12527036 kB' 'KReclaimable: 462936 kB' 'Slab: 917048 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454112 kB' 'KernelStack: 16576 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14474636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203396 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.422 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.423 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:03.424 nr_hugepages=1024 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.424 resv_hugepages=0 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.424 surplus_hugepages=0 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.424 anon_hugepages=0 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38386364 kB' 'MemAvailable: 42350772 kB' 'Buffers: 2704 kB' 'Cached: 16673536 kB' 'SwapCached: 0 kB' 'Active: 13607476 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099048 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575204 kB' 'Mapped: 162752 kB' 'Shmem: 12527056 kB' 'KReclaimable: 462936 kB' 'Slab: 917048 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454112 kB' 'KernelStack: 16352 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14474656 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 203316 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.424 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.425 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 22253684 kB' 'MemUsed: 10380284 kB' 'SwapCached: 0 kB' 'Active: 6552032 kB' 'Inactive: 279304 kB' 'Active(anon): 6391636 kB' 'Inactive(anon): 0 kB' 'Active(file): 160396 kB' 'Inactive(file): 279304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6698252 kB' 'Mapped: 67640 kB' 'AnonPages: 135688 kB' 'Shmem: 6258552 kB' 'KernelStack: 9528 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128256 kB' 'Slab: 389528 kB' 'SReclaimable: 128256 kB' 'SUnreclaim: 261272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 
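
The long runs of "[[ <field> == HugePages_Total ]] ... continue" above, and the node0 HugePages_Surp pass still in flight here, are common.sh's get_meminfo helper scanning one meminfo line at a time with IFS=': '. A condensed sketch of that shape, inferred from the trace rather than copied from the SPDK source (the function body and the extglob handling are assumptions):

    #!/usr/bin/env bash
    # get_meminfo <field> [node]: print the value of one meminfo field.
    # Shape inferred from the trace above; not the verbatim common.sh.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer the per-node copy -- this is why the
        # trace switches to /sys/devices/system/node/node0/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem <"$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" lines
            echo "$val"                        # e.g. "echo 1024" above
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total    # 1024 on this box
    get_meminfo HugePages_Surp 0   # 0 for NUMA node 0
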
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.426 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.427 10:23:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:03.427 node0=1024 expecting 1024 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:03.427 00:03:03.427 real 0m3.958s 00:03:03.427 user 0m1.486s 00:03:03.427 sys 0m2.527s 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:03.427 10:23:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:03.427 ************************************ 00:03:03.427 END TEST default_setup 00:03:03.427 ************************************ 00:03:03.427 10:23:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:03.427 10:23:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:03.427 10:23:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:03.427 10:23:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:03.427 ************************************ 00:03:03.427 START TEST per_node_1G_alloc 00:03:03.427 ************************************ 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:03.427 10:23:51 
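
The "node0=1024 expecting 1024" line above is the end of the per-node accounting: get_nodes records what the kernel reports for each NUMA node, then any surplus pages are folded into the expected count before the comparison. A minimal sketch of that bookkeeping (the nodes_sys/nodes_test names follow the trace; the awk extraction and the resv=0 shortcut are simplifications):

    #!/usr/bin/env bash
    # Expected pages for this test: 1024 on node0 (what default_setup asked for).
    declare -A nodes_sys nodes_test=([0]=1024)
    # get_nodes: what the kernel actually reports per NUMA node, e.g.
    # "Node 0 HugePages_Total: 1024" in node0's meminfo.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        nodes_sys[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    for n in "${!nodes_test[@]}"; do
        # Surplus pages count toward the expectation (resv taken as 0 here).
        surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$n/meminfo")
        (( nodes_test[$n] += surp ))
        echo "node$n=${nodes_sys[$n]} expecting ${nodes_test[$n]}"
    done
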
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.427 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.428 10:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:06.751 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:06.751 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:06.751 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:03:06.751 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:06.751 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 
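
The get_test_nr_hugepages call above turns the requested size into a page count: 1 GiB is 1048576 kB, the default hugepage is 2048 kB, and 1048576 / 2048 = 512, which is why nr_hugepages=512 and both nodes_test slots become 512 (and why the later "@147 nr_hugepages=1024" reflects 512 pages on each of the two nodes). A stripped-down version of that arithmetic, with argument handling simplified from hugepages.sh:

    #!/usr/bin/env bash
    # get_test_nr_hugepages <size_kB> [node ids...] -- simplified sketch.
    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@") id
        # Default hugepage size in kB, from "Hugepagesize: 2048 kB".
        local default_hugepages
        default_hugepages=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
        local nr_hugepages=$(( size / default_hugepages ))
        for id in "${node_ids[@]}"; do
            echo "node$id: $nr_hugepages hugepages"   # 512 each in the trace
        done
    }
    get_test_nr_hugepages 1048576 0 1
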
-- # local sorted_t 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38408608 kB' 'MemAvailable: 42373016 kB' 'Buffers: 2704 kB' 'Cached: 16673624 kB' 'SwapCached: 0 kB' 'Active: 13608248 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099820 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574952 kB' 'Mapped: 162952 kB' 'Shmem: 12527144 kB' 'KReclaimable: 462936 kB' 'Slab: 917484 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454548 kB' 'KernelStack: 16464 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14472648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203556 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.751 
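
The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" step above is verify_nr_hugepages's transparent-hugepage gate: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled, so any value other than "[never]" means anonymous THP can exist and AnonHugePages has to be sampled (it comes back 0 kB on this box). Roughly, under that standard sysfs path:

    #!/usr/bin/env bash
    # THP gate: only sample AnonHugePages when transparent hugepages are
    # not globally disabled. Sketch of the check seen in the trace.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # e.g. "always [madvise] never" here -> madvise mode is active
        anon=$(awk '/AnonHugePages/ {print $2}' /proc/meminfo)   # kB
    else
        anon=0
    fi
    echo "anon=${anon}"
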
10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.751 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.752 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.753 10:23:54 
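
With anon=0 settled, the verifier pulls the global HugePages_Surp the same way, feeding the relation checked earlier in the trace: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages. A compact sketch of that final check (the meminfo helper and the sysctl read are simplifications of what common.sh actually does):

    #!/usr/bin/env bash
    # Global consistency check, cf. "(( 1024 == nr_hugepages + surp + resv ))".
    meminfo() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }
    total=$(meminfo HugePages_Total)
    surp=$(meminfo HugePages_Surp)
    resv=$(meminfo HugePages_Rsvd)
    nr_hugepages=$(< /proc/sys/vm/nr_hugepages)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages total"
    else
        echo "mismatch: $total != $nr_hugepages + $surp + $resv" >&2
    fi
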
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38410284 kB' 'MemAvailable: 42374692 kB' 'Buffers: 2704 kB' 'Cached: 16673628 kB' 'SwapCached: 0 kB' 'Active: 13608000 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099572 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575260 kB' 'Mapped: 162896 kB' 'Shmem: 12527148 kB' 'KReclaimable: 462936 kB' 'Slab: 917476 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454540 kB' 'KernelStack: 16448 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14472664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203524 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.753 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.753 10:23:54 
[xtrace condensed: get_meminfo walks every remaining /proc/meminfo key (Cached, SwapCached, Active, Inactive, ... HugePages_Total, HugePages_Free, HugePages_Rsvd) with the same IFS=': ' / read -r var val _ / continue pattern until the requested key matches]
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.755 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38409936 kB' 'MemAvailable: 42374344 kB' 'Buffers: 2704 kB' 'Cached: 16673644 kB' 'SwapCached: 0 kB' 'Active: 13607848 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099420 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575528 kB' 'Mapped: 162784 kB' 'Shmem: 12527164 kB' 'KReclaimable: 462936 kB' 'Slab: 917452 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454516 kB' 'KernelStack: 16448 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14472688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203524 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: the same key-by-key scan now runs against HugePages_Rsvd until it matches]
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
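The helper being traced here, get_meminfo from setup/common.sh, is easier to follow outside the xtrace. The sketch below is a hedged reconstruction from the traced commands and line numbers above (the real SPDK source may differ in detail): it reads either /proc/meminfo or a node-local copy into an array, strips any per-node "Node N " prefix, then streams the entries through a read loop until the requested key matches and prints its value.

    # Approximate reconstruction of setup/common.sh:get_meminfo from the
    # xtrace above -- an illustration, not the verbatim source.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # When a node is given, read that node's local meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip the prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan key by key -- this is the long run of IFS/read/continue
        # iterations in the trace -- and print the first match's value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

In this run, get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd both return 0 and get_meminfo HugePages_Total returns 1024, matching the snapshot printed above.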
00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38410552 kB' 'MemAvailable: 42374960 kB' 'Buffers: 2704 kB' 'Cached: 16673668 kB' 'SwapCached: 0 kB' 'Active: 13607564 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099136 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575200 kB' 'Mapped: 162784 kB' 'Shmem: 12527188 kB' 'KReclaimable: 462936 kB' 'Slab: 917452 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454516 kB' 'KernelStack: 16432 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14472712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203524 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.757 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:06.758 10:23:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:06.758 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[common.sh@31/@32 xtrace condensed: PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted are each read and skipped with 'continue']
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
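The common.sh@31-@33 xtrace above is get_meminfo at work: it scans the meminfo stream one 'key: value' pair at a time and echoes the value of the first key that matches the request. A minimal stand-alone sketch of the same pattern (illustrative helper name, not the repository source verbatim; note the per-node files additionally carry a 'Node N ' prefix that common.sh strips before this loop runs):

    get_meminfo_sketch() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        # Split each record on ':' and ' '; a trailing unit such as "kB"
        # falls into "_" and is discarded.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }
    # get_meminfo_sketch HugePages_Total   -> 1024 on this host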
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.759 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23302348 kB' 'MemUsed: 9331620 kB' 'SwapCached: 0 kB' 'Active: 6552328 kB' 'Inactive: 279304 kB' 'Active(anon): 6391932 kB' 'Inactive(anon): 0 kB' 'Active(file): 160396 kB' 'Inactive(file): 279304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6698364 kB' 'Mapped: 67656 kB' 'AnonPages: 136456 kB' 'Shmem: 6258664 kB' 'KernelStack: 9624 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128256 kB' 'Slab: 389716 kB' 'SReclaimable: 128256 kB' 'SUnreclaim: 261460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[common.sh@31/@32 xtrace condensed: every node0 key from MemTotal through HugePages_Free is read and skipped with 'continue']
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.761 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661484 kB' 'MemFree: 15106188 kB' 'MemUsed: 12555296 kB' 'SwapCached: 0 kB' 'Active: 7057932 kB' 'Inactive: 3361452 kB' 'Active(anon): 6709900 kB' 'Inactive(anon): 0 kB' 'Active(file): 348032 kB' 'Inactive(file): 3361452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9978032 kB' 'Mapped: 95632 kB' 'AnonPages: 441432 kB' 'Shmem: 6268548 kB' 'KernelStack: 6808 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334680 kB' 'Slab: 527736 kB' 'SReclaimable: 334680 kB' 'SUnreclaim: 193056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[common.sh@31/@32 xtrace condensed: every node1 key from MemTotal through HugePages_Free is read and skipped with 'continue']
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:06.762
00:03:06.762 real 0m3.031s
00:03:06.762 user 0m1.009s
00:03:06.762 sys 0m1.904s
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:06.762 10:23:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:06.762 ************************************
00:03:06.762 END TEST per_node_1G_alloc
00:03:06.762 ************************************
00:03:06.762 10:23:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:06.762 10:23:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:06.762 10:23:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:06.762 10:23:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:06.762 ************************************
00:03:06.762 START TEST even_2G_alloc
00:03:06.762 ************************************
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
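For the record, the per_node_1G_alloc pass that just finished boils down to a small piece of accounting: each node's expected count (512 x 2 MiB pages, i.e. 1 GiB per node) is bumped by whatever reserved and surplus pages the kernel reports before being compared against the requested split. A condensed sketch of that bookkeeping, assuming the standard sysfs node-meminfo layout (simplified from the hugepages.sh@115-@128 trace, not the script verbatim):

    nodes_test=(512 512)   # expected 2 MiB pages per NUMA node
    resv=0                 # HugePages_Rsvd reported for this run
    for node in "${!nodes_test[@]}"; do
        # Node rows look like "Node 0 HugePages_Surp: 0"; take the last field.
        surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node${node}/meminfo")
        (( nodes_test[node] += resv + surp ))   # both are 0 here, so totals stay 512
    done
    printf 'node%s=%s expecting 512\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"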
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:06.763 10:23:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
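The hugepages.sh@49-@84 lines above are the whole sizing computation for this test: 2097152 kB requested (2 GiB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, which get_test_nr_hugepages_per_node then spreads as 512 pages on each of the two NUMA nodes before setup.sh applies it with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. The same arithmetic as a stand-alone sketch (the /proc and /sys paths are the standard kernel interfaces; everything else is illustrative):

    size_kb=2097152                                               # requested total, in kB
    hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo) # 2048 on this host
    nodes=(/sys/devices/system/node/node[0-9]*)                   # node0 node1 here
    nr_hugepages=$(( size_kb / hp_kb ))                           # 1024 pages
    echo "$(( nr_hugepages / ${#nodes[@]} )) pages per node"      # 512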
00:03:10.059 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:10.059 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:10.059 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:10.059 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.059 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:10.059 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:10.059 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:10.059 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:10.059 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:10.059 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:10.059 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.060 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38441292 kB' 'MemAvailable: 42405700 kB' 'Buffers: 2704 kB' 'Cached: 16673780 kB' 'SwapCached: 0 kB' 'Active: 13606748 kB' 'Inactive: 3640756 kB' 'Active(anon): 13098320 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573828 kB' 'Mapped: 161800 kB' 'Shmem: 12527300 kB' 'KReclaimable: 462936 kB' 'Slab: 917584 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454648 kB' 'KernelStack: 16416 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14463872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203444 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
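A note on the hugepages.sh@96 guard above: /sys/kernel/mm/transparent_hugepage/enabled reports the active THP mode by bracketing it (here 'always [madvise] never'), and verify_nr_hugepages only samples AnonHugePages when the mode is not pinned to never, since transparent hugepages are the only source of anonymous hugepages that could perturb the count. A hedged sketch of that probe (the sysfs path is the standard kernel interface; everything else is illustrative):

    thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP can still create anonymous hugepages, so account for them
        # separately from the hugetlb pool being verified.
        anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"
    fi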
[common.sh@31/@32 xtrace condensed: every /proc/meminfo key from MemTotal through HardwareCorrupted is read and skipped with 'continue']
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38443244 kB' 'MemAvailable: 42407652 kB' 'Buffers: 2704 kB' 'Cached: 16673780 kB' 'SwapCached: 0 kB' 'Active: 13606796 kB' 'Inactive: 3640756 kB' 'Active(anon): 13098368 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573988 kB' 'Mapped: 161732 kB' 'Shmem: 12527300 kB' 'KReclaimable: 462936 kB' 'Slab: 917584 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454648 kB' 'KernelStack: 16304 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14463744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203428 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.061 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.061 10:23:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [... setup/common.sh@31/@32 scan for HugePages_Surp elided: Buffers through HugePages_Rsvd, each non-match taking the continue branch ...] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- #
mem=("${mem[@]#Node +([0-9]) }") 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38444708 kB' 'MemAvailable: 42409116 kB' 'Buffers: 2704 kB' 'Cached: 16673800 kB' 'SwapCached: 0 kB' 'Active: 13605488 kB' 'Inactive: 3640756 kB' 'Active(anon): 13097060 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572900 kB' 'Mapped: 161652 kB' 'Shmem: 12527320 kB' 'KReclaimable: 462936 kB' 'Slab: 917528 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454592 kB' 'KernelStack: 16432 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14465132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203492 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.063 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:10.064 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ [... setup/common.sh@31/@32 scan for HugePages_Rsvd elided: SwapCached through Unaccepted, each non-match taking the continue branch ...] 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=':
' 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.065 nr_hugepages=1024 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.065 resv_hugepages=0 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.065 surplus_hugepages=0 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.065 anon_hugepages=0 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.065 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38445348 kB' 'MemAvailable: 42409756 kB' 
'Buffers: 2704 kB' 'Cached: 16673800 kB' 'SwapCached: 0 kB' 'Active: 13605856 kB' 'Inactive: 3640756 kB' 'Active(anon): 13097428 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573300 kB' 'Mapped: 161652 kB' 'Shmem: 12527320 kB' 'KReclaimable: 462936 kB' 'Slab: 917528 kB' 'SReclaimable: 462936 kB' 'SUnreclaim: 454592 kB' 'KernelStack: 16480 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14465152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203524 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.066 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.067 10:23:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23312536 kB' 'MemUsed: 9321432 kB' 'SwapCached: 0 kB' 'Active: 6550636 kB' 'Inactive: 279304 kB' 'Active(anon): 6390240 kB' 'Inactive(anon): 0 kB' 'Active(file): 160396 kB' 'Inactive(file): 279304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6698424 kB' 'Mapped: 67288 kB' 'AnonPages: 134708 kB' 'Shmem: 6258724 kB' 'KernelStack: 9752 kB' 'PageTables: 3592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128256 kB' 'Slab: 389972 kB' 'SReclaimable: 128256 kB' 'SUnreclaim: 261716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.067 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
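The long key-by-key walk running through this stretch is setup/common.sh's get_meminfo: it snapshots the relevant meminfo file into an array, strips the per-node prefix, then reads each line with IFS=': ' until the requested key matches and its value is echoed. A minimal sketch reconstructed from the commands visible in the trace (not the verbatim SPDK source):

    shopt -s extglob   # required for the +([0-9]) prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under /sys; no node argument means system-wide.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node <N> "; drop it so keys compare cleanly.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 0, it returns node0's surplus-page count, which is the 0 echoed at the end of this scan.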
00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.068 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661484 kB' 'MemFree: 15133460 kB' 'MemUsed: 12528024 kB' 'SwapCached: 0 kB' 'Active: 7055264 kB' 'Inactive: 3361452 kB' 'Active(anon): 6707232 kB' 'Inactive(anon): 0 kB' 'Active(file): 348032 kB' 'Inactive(file): 3361452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9978144 kB' 'Mapped: 94364 kB' 'AnonPages: 438624 kB' 'Shmem: 6268660 kB' 'KernelStack: 6744 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334680 kB' 'Slab: 527556 kB' 'SReclaimable: 334680 kB' 'SUnreclaim: 192876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Surp: 0' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 
10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.069 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 
10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:10.070 node0=512 expecting 512 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:10.070 node1=512 expecting 512 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:10.070 00:03:10.070 real 0m3.563s 00:03:10.070 user 0m1.227s 00:03:10.070 sys 0m2.344s 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:10.070 10:23:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:10.070 ************************************ 00:03:10.070 END TEST even_2G_alloc 00:03:10.070 ************************************ 00:03:10.330 10:23:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:10.330 10:23:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:10.330 10:23:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:10.330 10:23:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:10.330 ************************************ 00:03:10.330 START TEST odd_alloc 00:03:10.330 ************************************ 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- 
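even_2G_alloc has just passed: HUGEMEM=2048 reserves 1024 pages of 2 MiB, the system-wide check confirms 1024 == nr_hugepages + surplus + reserved, and each of the two NUMA nodes reports 512 pages, matching the "node0=512 expecting 512" and "node1=512 expecting 512" lines above (the [[ 512 == \5\1\2 ]] test is just bash xtrace quoting of a literal string compare). A sketch of that per-node verification, reconstructed from the hugepages.sh steps in the trace; the helper follows the get_meminfo sketch earlier and the variable names are illustrative, not the exact script:

    nr_hugepages=1024
    no_nodes=2
    expected=$(( nr_hugepages / no_nodes ))          # 512 two-megabyte pages per node
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        total=$(get_meminfo HugePages_Total "$n")    # per-node reservation
        surp=$(get_meminfo HugePages_Surp "$n")      # surplus pages do not count
        echo "node$n=$(( total - surp )) expecting $expected"
    done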
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.330 10:23:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:13.624 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:13.624 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:13.624 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:03:13.624 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.624 0000:80:04.5 (8086 2021): Already using the 
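odd_alloc deliberately requests an amount that cannot split evenly: HUGEMEM=2049 MiB is 2098176 kB, which at 2048 kB per page resolves to the 1025 pages seen as nr_hugepages above, so one of the two nodes must take 513 while the other takes 512 (the nodes_test assignments of 512 and 513 in the trace). A worked sketch of that sizing, with the round-up inferred from size=2098176 resolving to 1025 pages in the trace rather than copied from the script:

    HUGEMEM=2049                                  # MiB, as exported by the test
    hugepagesize_kb=2048                          # 2 MiB default huge pages
    size_kb=$(( HUGEMEM * 1024 ))                 # 2098176 kB requested
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # 1025
    no_nodes=2
    base=$(( nr_hugepages / no_nodes ))           # 512
    extra=$(( nr_hugepages % no_nodes ))          # 1 leftover page
    echo "one node gets $base pages, the other $(( base + extra ))"   # 512 and 513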
00:03:13.624 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:13.624 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:13.624 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:13.624 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:13.624 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:13.624 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:13.624 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:13.624 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:13.624 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:13.624 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:13.624 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:13.624 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38431844 kB' 'MemAvailable: 42396212 kB' 'Buffers: 2704 kB' 'Cached: 16673932 kB' 'SwapCached: 0 kB' 'Active: 13614316 kB' 'Inactive: 3640756 kB' 'Active(anon): 13105888 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581332 kB' 'Mapped: 162528 kB' 'Shmem: 12527452 kB' 'KReclaimable: 462896 kB' 'Slab: 916668 kB' 'SReclaimable: 462896 kB' 'SUnreclaim: 453772 kB' 'KernelStack: 16448 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486728 kB' 'Committed_AS: 14472312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203480 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 
1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.625 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 
10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38432096 kB' 'MemAvailable: 42396400 kB' 'Buffers: 2704 kB' 'Cached: 16673932 kB' 'SwapCached: 0 
kB' 'Active: 13613604 kB' 'Inactive: 3640756 kB' 'Active(anon): 13105176 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581116 kB' 'Mapped: 162428 kB' 'Shmem: 12527452 kB' 'KReclaimable: 462832 kB' 'Slab: 916596 kB' 'SReclaimable: 462832 kB' 'SUnreclaim: 453764 kB' 'KernelStack: 16464 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486728 kB' 'Committed_AS: 14472328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203448 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.626 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
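# The wall of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue"
# xtrace around this point is setup/common.sh's get_meminfo() scanning
# /proc/meminfo one "Key: value" pair per iteration until it reaches the
# requested key (here HugePages_Surp). A minimal re-creation of that loop,
# inferred from the trace alone -- a sketch, not the SPDK source, and
# get_meminfo_sketch is an illustrative name:
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _ mem_f=/proc/meminfo
    local -a mem
    # With a node argument, read the node-local file instead. With $node
    # empty -- as in this run -- the probe tests the odd-looking path
    # ".../node/node/meminfo", fails, and falls back to /proc/meminfo,
    # which is exactly what the common.sh@23 check above shows.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Node meminfo lines carry a "Node N " prefix; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    # IFS=': ' splits "HugePages_Surp:      0" into var=HugePages_Surp, val=0.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# e.g. "get_meminfo_sketch HugePages_Surp" would print 0 on this machine.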
00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
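# Where the scanned values come from: scripts/setup.sh (invoked above with
# HUGEMEM=2049 HUGE_EVEN_ALLOC=yes) programs the kernel hugepage pools, and
# per-NUMA-node counts are exposed through sysfs. A bare-bones illustration
# of that kernel interface -- not SPDK's setup.sh itself; nodes_test is the
# 513/512 split computed earlier:
nodes_test=(513 512)
for node in 0 1; do
    echo "${nodes_test[node]}" |
        sudo tee "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
# Read back what the verification loop here extracts from /proc/meminfo:
grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
# Expected on this box: Total 1025, Free 1025, Rsvd 0, Surp 0.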
00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.627 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.628 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.628 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.628 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.628 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.628 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.891 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38432396 kB' 'MemAvailable: 42396700 kB' 'Buffers: 2704 kB' 'Cached: 16673952 kB' 'SwapCached: 0 kB' 'Active: 13614284 kB' 'Inactive: 3640756 kB' 'Active(anon): 13105856 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581736 kB' 'Mapped: 162428 kB' 'Shmem: 12527472 kB' 'KReclaimable: 462832 kB' 'Slab: 916588 kB' 'SReclaimable: 462832 kB' 'SUnreclaim: 453756 kB' 'KernelStack: 16480 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486728 kB' 'Committed_AS: 14479816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203448 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.892 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.893 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[trace condensed: setup/common.sh@31-@32 repeat "IFS=': '; read -r var val _; continue" across AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal; none match HugePages_Rsvd]
00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:13.894 nr_hugepages=1025 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.894 resv_hugepages=0 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.894 surplus_hugepages=0 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.894 anon_hugepages=0 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38441088 kB' 'MemAvailable: 42405392 kB' 'Buffers: 2704 kB' 'Cached: 16673972 kB' 'SwapCached: 0 kB' 'Active: 13613628 kB' 'Inactive: 3640756 kB' 'Active(anon): 13105200 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581068 kB' 'Mapped: 162436 kB' 'Shmem: 12527492 kB' 'KReclaimable: 462832 kB' 'Slab: 916572 kB' 'SReclaimable: 462832 kB' 'SUnreclaim: 453740 kB' 'KernelStack: 16416 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486728 kB' 'Committed_AS: 14472140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203400 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.894 10:24:02 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue
[trace condensed: the same @31-@32 read/continue cycle skips SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted; none match HugePages_Total]
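What this trace is exercising (here and in the closing iterations just below) is a plain line-by-line meminfo lookup: split each "Key: value kB" line, continue until the key matches, echo the value. A minimal stand-alone sketch of that idea, in illustrative bash with a hypothetical name, not the exact get_meminfo helper from setup/common.sh:

get_meminfo_sketch() {                      # hypothetical name, illustration only
    local get=$1 node=$2 line key val
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; those lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}          # strip the sysfs prefix (no-op for /proc)
        key=${line%%:*}
        val=${line#*:}
        val=${val//[^0-9]/}                 # keep digits only, drops " kB" and spaces
        if [[ $key == "$get" ]]; then
            echo "$val"                     # e.g. 1025 for HugePages_Total here
            return 0
        fi
    done < "$mem_f"
    return 1                                # field not present
}

On the system traced here, get_meminfo_sketch HugePages_Total would print 1025 from /proc/meminfo, and get_meminfo_sketch HugePages_Surp 0 would read node0's sysfs copy instead.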
00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.895 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.896 10:24:02 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23290288 kB' 'MemUsed: 9343680 kB' 'SwapCached: 0 kB' 'Active: 6557748 kB' 'Inactive: 279304 kB' 'Active(anon): 6397352 kB' 'Inactive(anon): 0 kB' 'Active(file): 160396 kB' 'Inactive(file): 279304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6698428 kB' 'Mapped: 67300 kB' 'AnonPages: 141852 kB' 'Shmem: 6258728 kB' 'KernelStack: 9704 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128192 kB' 'Slab: 389340 kB' 'SReclaimable: 128192 kB' 'SUnreclaim: 261148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.896 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.896 10:24:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
[trace condensed: the @31-@32 read/continue cycle skips the node0 fields MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free]
00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
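The node0 HugePages_Surp lookup has just matched and returns 0 below, which gives odd_alloc every input for its balance check. Reduced to the values from this run (a sketch of the arithmetic, not the test script itself):

# Values copied from the trace; the check itself is the interesting part.
nr_hugepages=1025 surp=0 resv=0
node0=512 node1=513                        # per-node HugePages_Total
(( node0 + node1 == nr_hugepages + surp + resv )) \
    && echo "1025 pages balance across 2 NUMA nodes (512 + 513)"

An odd page count is deliberate: it cannot split evenly across two nodes, so one node must absorb the extra page, which is exactly the 512/513 split echoed further down.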
00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661484 kB' 'MemFree: 15151064 kB' 'MemUsed: 12510420 kB' 'SwapCached: 0 kB' 'Active: 7055704 kB' 'Inactive: 3361452 kB' 'Active(anon): 6707672 kB' 'Inactive(anon): 0 kB' 'Active(file): 348032 kB' 'Inactive(file): 3361452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9978292 kB' 'Mapped: 95128 kB' 'AnonPages: 438924 kB' 'Shmem: 6268808 kB' 'KernelStack: 6744 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334640 kB' 'Slab: 527232 kB' 'SReclaimable: 334640 kB' 'SUnreclaim: 192592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:13.897 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the @31-@32 read/continue cycle skips the node1 fields SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages]
00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.898 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- 
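The loop traced above is setup/common.sh's get_meminfo walking a meminfo file one field at a time; the escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p strings are just bash xtrace quoting the right-hand side of the [[ ... == ... ]] pattern match. A minimal stand-alone sketch of the same pattern, assuming a direct /proc/meminfo redirect and an illustrative function name rather than the SPDK helper's exact plumbing:

# Sketch (assumed name, not the SPDK helper verbatim): split each line on
# ':' and whitespace, skip until the requested field matches, print its value.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # mirrors the '@32 continue' trace lines
    echo "$val"                        # mirrors '@33 echo'; the kB unit lands in $_
    return 0
  done < /proc/meminfo
}
# get_meminfo_sketch HugePages_Surp   -> 0, matching the trace above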
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:13.899 node0=512 expecting 513
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:13.899 node1=513 expecting 512
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:13.899
00:03:13.899 real	0m3.638s
00:03:13.899 user	0m1.334s
00:03:13.899 sys	0m2.342s
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:13.899 10:24:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:13.899 ************************************
00:03:13.899 END TEST odd_alloc
00:03:13.899 ************************************
00:03:13.899 10:24:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:13.899 10:24:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:13.899 10:24:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:13.899 10:24:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:13.899 ************************************
00:03:13.899 START TEST custom_alloc
00:03:13.899 ************************************
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
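The @49-@57 conversion above is the requested size in kB divided by the default huge page size; with the 2048 kB Hugepagesize the meminfo snapshots below report, the two get_test_nr_hugepages calls come out to 512 and 1024 pages. A quick re-derivation (the variable name is an assumption, not the script's):

# Re-deriving the nr_hugepages values seen at @57, assuming 2048 kB pages:
default_hugepages=2048                      # kB per 2 MiB huge page
echo $(( 1048576 / default_hugepages ))     # 512  <- get_test_nr_hugepages 1048576
echo $(( 2097152 / default_hugepages ))     # 1024 <- get_test_nr_hugepages 2097152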
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
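Two steps are visible in the trace above: the @81-@84 loop spreads the 512-page request evenly over the two NUMA nodes (256 each; the ': 256' and ': 1' lines are no-op colon commands that appear to track the per-node share and remainder), and the @181-@183 loop starts accumulating per-node HUGENODE entries while summing the grand total. A hedged sketch of the even split, under those assumptions and with remainder handling omitted:

# Assumed-name sketch of the @81-@84 split: divide pages over nodes,
# filling nodes_test from the highest index down as the trace does.
split_evenly() {
  local pages=$1 nodes=$2 i
  local -a nodes_test
  for (( i = nodes; i > 0; i-- )); do
    nodes_test[i - 1]=$(( pages / nodes ))
  done
  echo "${nodes_test[@]}"
}
split_evenly 512 2   # -> 256 256, matching nodes_test[_no_nodes - 1]=256 above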
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.899 10:24:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:17.208 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:17.208 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:17.208 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:17.208 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.208 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:17.208
0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:17.208 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.208 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.209 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 37420884 kB' 'MemAvailable: 41385156 kB' 'Buffers: 2704 kB' 'Cached: 16674084 kB' 'SwapCached: 0 kB' 'Active: 13610832 kB' 'Inactive: 3640756 kB' 'Active(anon): 13102404 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578128 kB' 'Mapped: 161800 kB' 'Shmem: 12527604 kB' 'KReclaimable: 462800 kB' 'Slab: 916704 kB' 'SReclaimable: 462800 kB' 'SUnreclaim: 453904 kB' 'KernelStack: 16544 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963464 kB' 'Committed_AS: 14470744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:17.209 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.209 
10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-@32 read and skipped every /proc/meminfo field from MemTotal through HardwareCorrupted before AnonHugePages matched]
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.210 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 37419856 kB' 'MemAvailable: 41384128 kB' 'Buffers: 2704 kB' 'Cached: 16674088 kB' 'SwapCached: 0 kB' 'Active: 13613276 kB' 'Inactive: 3640756 kB' 'Active(anon): 13104848 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581060 kB' 'Mapped: 162180 kB' 'Shmem: 12527608 kB' 'KReclaimable: 462800 kB' 'Slab: 916744 kB' 'SReclaimable: 462800 kB' 'SUnreclaim: 453944 kB' 'KernelStack: 16544 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963464 kB' 'Committed_AS: 14472684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203396 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
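The @29 expansion traced above only matters when a specific NUMA node is requested: per-node meminfo files prefix every field with "Node <n> ", and the extglob pattern strips that prefix so one parser serves both /proc/meminfo and /sys/devices/system/node/node<n>/meminfo. A small demonstration with made-up sample values:

# extglob must be enabled for +([0-9]) inside the ${var#...} prefix strip:
shopt -s extglob
mem=('Node 0 HugePages_Total: 512' 'Node 0 HugePages_Free: 512')   # fabricated sample
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"   # -> HugePages_Total: 512 / HugePages_Free: 512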
[xtrace condensed: setup/common.sh@31-@32 read and skipped every /proc/meminfo field from MemTotal through Unaccepted while scanning for HugePages_Surp]
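Both snapshots in this run report HugePages_Total: 1536, exactly the nodes_hp[0]=512 + nodes_hp[1]=1024 requested through HUGENODE; the get_meminfo calls here feed that comparison. A spot-check in the same spirit (the awk one-liner is ours, not the script's):

# Confirm the kernel-wide total matches the per-node requests (512 + 1024):
total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
(( total == 512 + 1024 )) && echo "HugePages_Total verified: $total"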
10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 37423960 kB' 'MemAvailable: 41388232 kB' 'Buffers: 2704 kB' 'Cached: 16674108 kB' 'SwapCached: 0 kB' 'Active: 13610836 kB' 'Inactive: 3640756 kB' 'Active(anon): 13102408 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578240 kB' 'Mapped: 161676 kB' 'Shmem: 12527628 kB' 'KReclaimable: 462800 kB' 'Slab: 916744 kB' 'SReclaimable: 462800 kB' 'SUnreclaim: 453944 kB' 
'KernelStack: 16560 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963464 kB' 'Committed_AS: 14467192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203412 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.476 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.476 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.476 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.476 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.476 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.476 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.476 10:24:05 
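The scan that follows walks this snapshot key by key, comparing each field name against HugePages_Rsvd until it matches and echoes the value. A minimal sketch of that parsing pattern (a hypothetical standalone rewrite for illustration, not the exact setup/common.sh helper; get_meminfo_sketch is an invented name):

  #!/usr/bin/env bash
  # Look up one key in /proc/meminfo, or in a NUMA node's meminfo file when a
  # node number is given. Mirrors the IFS=': ' read -r var val _ loop traced here.
  get_meminfo_sketch() {
      local get=$1 node=$2 mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      # Per-node files prefix each row with "Node N "; strip it, then split
      # "Key: value [kB]" and print the value for the requested key.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed "s/^Node $node //" "$mem_f")
      return 1
  }

Against the snapshot above, get_meminfo_sketch HugePages_Rsvd would print 0, which is exactly what the traced call returns below.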
00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.212 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:17.477 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:17.478 nr_hugepages=1536
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.478 resv_hugepages=0
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.478 surplus_hugepages=0
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.478 anon_hugepages=0
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 37422408 kB' 'MemAvailable: 41386680 kB' 'Buffers: 2704 kB' 'Cached: 16674124 kB' 'SwapCached: 0 kB' 'Active: 13613540 kB' 'Inactive: 3640756 kB' 'Active(anon): 13105112 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580796 kB' 'Mapped: 162180 kB' 'Shmem: 12527644 kB' 'KReclaimable: 462800 kB' 'Slab: 916712 kB' 'SReclaimable: 462800 kB' 'SUnreclaim: 453912 kB' 'KernelStack: 16464 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963464 kB' 'Committed_AS: 14472236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203396 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
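What hugepages.sh is asserting around this read-back: the HugePages_Total reported by the kernel must equal the requested nr_hugepages plus any surplus and reserved pages (1536 == 1536 + 0 + 0 in this run). A hedged sketch of that identity check (check_hugepages is an invented name, not the SPDK function):

  # Verify the kernel granted the requested hugepage count, mirroring the
  # (( 1536 == nr_hugepages + surp + resv )) test at setup/hugepages.sh@107.
  check_hugepages() {
      local want=$1 total surp resv
      total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
      surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
      resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
      (( total == want + surp + resv ))
  }
  check_hugepages 1536 && echo "hugepage allocation satisfied"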
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.478 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:17.479 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
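get_nodes has just recorded the per-node split this custom_alloc test asked for: 512 pages on node0 and 1024 on node1, which is exactly the 1536-page total (512 + 1024 = 1536, i.e. 3 GiB of 2048 kB pages, matching 'Hugetlb: 3145728 kB' in the snapshots above). A small illustrative check of that arithmetic (variable names mirror the nodes_sys bookkeeping in the trace, but the snippet itself is hypothetical):

  # Per-node hugepage targets must sum to the global allocation.
  declare -A nodes_sys=([0]=512 [1]=1024)
  nr_hugepages=1536 total=0
  for node in "${!nodes_sys[@]}"; do
      (( total += nodes_sys[node] ))   # 512 + 1024
  done
  (( total == nr_hugepages )) && echo "per-node split covers all $total pages"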
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23304676 kB' 'MemUsed: 9329292 kB' 'SwapCached: 0 kB' 'Active: 6558000 kB' 'Inactive: 279304 kB' 'Active(anon): 6397604 kB' 'Inactive(anon): 0 kB' 'Active(file): 160396 kB' 'Inactive(file): 279304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6698536 kB' 'Mapped: 67312 kB' 'AnonPages: 141948 kB' 'Shmem: 6258836 kB' 'KernelStack: 9704 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128160 kB' 'Slab: 389400 kB' 'SReclaimable: 128160 kB' 'SUnreclaim: 261240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.480 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue
[... xtrace: the node0 scan reads and skips the remaining meminfo keys (NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), one IFS=': ' read and continue per key ...]
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
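The loop traced above is setup/common.sh's get_meminfo expanding under xtrace: the chosen meminfo file is word-split on ': ' and every key is read and skipped until the requested one matches, at which point its value is echoed and the function returns. A minimal self-contained sketch of that scan (the helper name and the sed-based node-prefix strip are illustrative, not the exact SPDK code):

  #!/usr/bin/env bash
  # Sketch of the scan xtrace'd above: print one key's value from
  # /proc/meminfo, or from a node's own meminfo when a node id is given.
  # Usage: get_meminfo_sketch <Key> [node]   (name is hypothetical)
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      # Per-node files prefix every line with "Node <id> "; drop that,
      # then split on ': ' exactly as the traced read loop does.
      sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; break; }
      done
  }
  get_meminfo_sketch HugePages_Surp 0   # prints 0 on this run's node0

The surplus value just returned is 0, so the accumulation steps above leave node0's tally at the 512 pages the test placed there.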
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.481 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.482 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661484 kB' 'MemFree: 14113728 kB' 'MemUsed: 13547756 kB' 'SwapCached: 0 kB' 'Active: 7056640 kB' 'Inactive: 3361452 kB' 'Active(anon): 6708608 kB' 'Inactive(anon): 0 kB' 'Active(file): 348032 kB' 'Inactive(file): 3361452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9978320 kB' 'Mapped: 94364 kB' 'AnonPages: 439916 kB' 'Shmem: 6268836 kB' 'KernelStack: 6760 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334640 kB' 'Slab: 527312 kB' 'SReclaimable: 334640 kB' 'SUnreclaim: 192672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace: the same scan over node1's dump, reading and skipping every key from MemTotal through HugePages_Free before matching the requested one ...]
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:17.483 node0=512 expecting 512
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:17.483 node1=1024 expecting 1024
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:17.483
00:03:17.483 real 0m3.519s
00:03:17.483 user 0m1.221s
00:03:17.483 sys 0m2.270s
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:17.483 10:24:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:17.483 ************************************
00:03:17.483 END TEST custom_alloc
00:03:17.483 ************************************
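The check that closes the test above, [[ 512,1024 == \5\1\2\,\1\0\2\4 ]], passes because the per-node counts the kernel reports match exactly what custom_alloc requested. Reading the same split directly from sysfs is a one-liner per node; a standalone sketch reusing this run's expected values (standard kernel paths, 2 MiB pages):

  #!/usr/bin/env bash
  # Compare each NUMA node's 2 MiB hugepage count against an expected
  # split, mirroring the "node0=512 expecting 512" lines above.
  declare -A expected=([0]=512 [1]=1024)
  for node in "${!expected[@]}"; do
      nr=$(</sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages)
      echo "node$node=$nr expecting ${expected[$node]}"
      [[ $nr -eq ${expected[$node]} ]] || exit 1
  done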
00:03:17.483 10:24:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:17.483 10:24:05 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:17.483 10:24:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:17.483 10:24:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:17.483 ************************************
00:03:17.483 START TEST no_shrink_alloc
00:03:17.483 ************************************
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.483 10:24:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:20.778 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:20.778 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:20.778 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:20.778 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:20.778 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
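In the trace above, get_test_nr_hugepages turned the 2097152 kB request into nr_hugepages=1024, which pins down the units: both the requested size and the kernel's default hugepage size (2048 kB on this machine) are in kB, and the page count is their quotient. A sketch of just that arithmetic:

  #!/usr/bin/env bash
  # Reproduce the sizing step from the trace: 2097152 kB of hugepage
  # memory at the default 2048 kB page size is 1024 pages.
  size_kb=2097152
  page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
  (( size_kb >= page_kb )) || { echo "request below one page" >&2; exit 1; }
  echo "nr_hugepages=$(( size_kb / page_kb ))"                 # nr_hugepages=1024

Because a single node ('0') was named, the whole 1024-page budget lands on node 0, as the nodes_test[_no_nodes]=1024 step shows.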
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.779 10:24:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38477208 kB' 'MemAvailable: 42441472 kB' 'Buffers: 2704 kB' 'Cached: 16674232 kB' 'SwapCached: 0 kB' 'Active: 13609080 kB' 'Inactive: 3640756 kB' 'Active(anon): 13100652 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575732 kB' 'Mapped: 161808 kB' 'Shmem: 12527752 kB' 'KReclaimable: 462792 kB' 'Slab: 916768 kB' 'SReclaimable: 462792 kB' 'SUnreclaim: 453976 kB' 'KernelStack: 16432 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203492 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
[... xtrace: the scan reads and skips every key from MemTotal through HardwareCorrupted before matching the requested one ...]
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
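anon=0 just concluded the AnonHugePages pass: the guard traced earlier, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], passes because transparent hugepages are in madvise mode rather than disabled, so the verifier reads AnonHugePages (0 kB in the dump) and charges it to the tally. The same two steps in isolation, assuming the standard kernel paths:

  #!/usr/bin/env bash
  # Fold AnonHugePages into the count only when transparent hugepages
  # are not "[never]", matching the guard and the anon=0 result above.
  anon=0
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB
  fi
  echo "anon=$anon"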
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.780 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38478524 kB' 'MemAvailable: 42442788 kB' 'Buffers: 2704 kB' 'Cached: 16674236 kB' 'SwapCached: 0 kB' 'Active: 13608324 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099896 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575440 kB' 'Mapped: 161704 kB' 'Shmem: 12527756 kB' 'KReclaimable: 462792 kB' 'Slab: 916720 kB' 'SReclaimable: 462792 kB' 'SUnreclaim: 453928 kB' 'KernelStack: 16416 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
[... xtrace: the HugePages_Surp scan reads and skips every key from MemTotal through AnonHugePages, then continues ...]
00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38478812 kB' 'MemAvailable: 42443076 kB' 'Buffers: 2704 kB' 'Cached: 16674252 kB' 'SwapCached: 0 kB' 'Active: 13608004 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099576 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575112 kB' 'Mapped: 161704 kB' 'Shmem: 12527772 kB' 'KReclaimable: 462792 kB' 'Slab: 916720 kB' 'SReclaimable: 462792 kB' 'SUnreclaim: 453928 kB' 'KernelStack: 16416 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203444 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 
'DirectMap1G: 35651584 kB' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.782 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.783 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.783 10:24:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.784 10:24:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.784 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@33 -- # echo 0 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.785 nr_hugepages=1024 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.785 resv_hugepages=0 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.785 surplus_hugepages=0 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.785 anon_hugepages=0 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38478560 kB' 'MemAvailable: 42442824 kB' 'Buffers: 2704 kB' 'Cached: 16674276 kB' 'SwapCached: 0 kB' 'Active: 13608032 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099604 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575108 kB' 'Mapped: 161704 kB' 'Shmem: 12527796 kB' 'KReclaimable: 462792 kB' 'Slab: 916720 kB' 'SReclaimable: 462792 kB' 'SUnreclaim: 453928 kB' 'KernelStack: 16416 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203444 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
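The wall of xtrace above is the harness's get_meminfo helper at work: it snapshots the relevant meminfo file into an array, strips any "Node N " prefix, then scans field by field until the requested key matches, which is why every non-matching field surfaces as one [[ ... ]] test plus a continue. A minimal, self-contained sketch of that pattern, with get_meminfo_sketch as a hypothetical stand-in for the real helper in setup/common.sh rather than a copy of it:

#!/usr/bin/env bash
# Sketch of the parsing pattern traced above; get_meminfo_sketch is a
# hypothetical stand-in for the setup/common.sh helper, not a copy of it.
shopt -s extglob   # enables the +([0-9]) pattern that strips "Node N " prefixes

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo, whose lines carry a
    # "Node N " prefix; with no node argument this test is false.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the prefix when present
    for line in "${mem[@]}"; do
        # Split "HugePages_Rsvd:      0" into var=HugePages_Rsvd, val=0.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the continues that fill the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd     # prints 0 on the machine in this log
get_meminfo_sketch HugePages_Free 0   # same key, restricted to NUMA node 0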
00:03:20.785 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace condensed: each field, MemTotal through Unaccepted, is tested against HugePages_Total and skipped with continue]
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
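With surp=0, resv=0, and HugePages_Total=1024 read back, the consistency check at hugepages.sh@110 passes before the script moves on to the per-node breakdown. A standalone sketch of that arithmetic, using awk as our shorthand for the meminfo lookup (nr_hugepages=1024 is the count this test configured earlier in the log):

#!/usr/bin/env bash
# Sketch of the accounting check traced at hugepages.sh@107-110: the script
# asserts that the kernel's HugePages_Total equals the requested nr_hugepages
# plus surplus and reserved pages, both of which are 0 in this run.
nr_hugepages=1024   # the count this test configured

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    # Matches the values echoed above: 1024 == 1024 + 0 + 0.
    echo "nr_hugepages=$nr_hugepages surplus_hugepages=$surp resv_hugepages=$resv"
else
    echo "unexpected hugepage accounting: HugePages_Total=$total" >&2
    exit 1
fi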
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
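(The ${node##*node} expansion in get_nodes strips the longest prefix ending in the literal "node", leaving just the NUMA node index. A small illustration of the same expansion; the glob matches the trace, but how the real get_nodes fills nodes_sys is an assumption here:)

    shopt -s extglob   # node+([0-9]) is an extended glob

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}   # "/sys/devices/system/node/node1" -> "1"
        # Assumed source of the per-node count; the traced script may read it elsewhere.
        nodes_sys[idx]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this runner (1024 pages on node0, 0 on node1)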
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.787 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 22261492 kB' 'MemUsed: 10372476 kB' 'SwapCached: 0 kB' 'Active: 6551356 kB' 'Inactive: 279304 kB' 'Active(anon): 6390960 kB' 'Inactive(anon): 0 kB' 'Active(file): 160396 kB' 'Inactive(file): 279304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6698628 kB' 'Mapped: 67340 kB' 'AnonPages: 135104 kB' 'Shmem: 6258928 kB' 'KernelStack: 9656 kB' 'PageTables: 3520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128160 kB' 'Slab: 389368 kB' 'SReclaimable: 128160 kB' 'SUnreclaim: 261208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: common.sh@31/@32 then scan each node0 meminfo key against HugePages_Surp, one "continue" per key from MemTotal through HugePages_Free]
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.789 10:24:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:24.087 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:24.087 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:24.087 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
[setup.sh output condensed: the sixteen 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 functions (8086 2021) likewise report "Already using the vfio-pci driver"]
00:03:24.087 INFO: Requested 512 hugepages but 1024 already allocated on node0
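(The INFO line is the behaviour this no_shrink_alloc test exists to pin down: with CLEAR_HUGE=no and NRHUGE=512, setup.sh must not shrink an existing 1024-page allocation. A hedged sketch of such a guard; illustrative only, the real logic lives in spdk/scripts/setup.sh and the sysfs path below is an assumption for a 2 MB page size:)

    requested=${NRHUGE:-512}
    sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    allocated=$(< "$sysfs/nr_hugepages")
    if (( allocated >= requested )); then
        # Never shrink: keep the larger pool that is already reserved.
        echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
    else
        echo "$requested" > "$sysfs/nr_hugepages"   # growing the pool needs root
    fi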
00:03:24.087 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:24.087 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:24.087 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:24.087 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
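(The @96 test reads /sys/kernel/mm/transparent_hugepage/enabled, where the kernel marks the active THP mode in brackets, here "always [madvise] never". AnonHugePages only needs to enter the accounting when the active mode is not [never]. A short sketch of the same check, reusing the get_meminfo helper sketched earlier:)

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in play, so anonymous huge pages count toward the totals.
        anon_kb=$(get_meminfo AnonHugePages)   # 0 kB on this runner
        echo "THP mode: $thp; AnonHugePages: ${anon_kb} kB"
    fi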
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.088 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38492300 kB' 'MemAvailable: 42456564 kB' 'Buffers: 2704 kB' 'Cached: 16674368 kB' 'SwapCached: 0 kB' 'Active: 13607976 kB' 'Inactive: 3640756 kB' 'Active(anon): 13099548 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574540 kB' 'Mapped: 161836 kB' 'Shmem: 12527888 kB' 'KReclaimable: 462792 kB' 'Slab: 916888 kB' 'SReclaimable: 462792 kB' 'SUnreclaim: 454096 kB' 'KernelStack: 16496 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
[trace condensed: common.sh@31/@32 scan each /proc/meminfo key against AnonHugePages, one "continue" per key from MemTotal through HardwareCorrupted]
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
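(Stepping back, what verify_nr_hugepages is assembling across these lookups: the configured total must reconcile with surplus, reserved and per-node counts. A condensed, hedged reconstruction of the arithmetic visible at hugepages.sh@110-@130 and @96-@99; variable names come from the trace, the surrounding logic is simplified, and get_meminfo/nodes_sys refer to the sketches further up:)

    nr_hugepages=1024
    total=$(get_meminfo HugePages_Total)   # 1024
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    anon=$(get_meminfo AnonHugePages)      # 0; only consulted when THP is not [never]

    (( total == nr_hugepages + surp + resv )) || echo "hugepage total mismatch"

    nodes_test=([0]=1024)                  # filled from the per-node lookups
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done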
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.089 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38492880 kB' 'MemAvailable: 42457112 kB' 'Buffers: 2704 kB' 'Cached: 16674372 kB' 'SwapCached: 0 kB' 'Active: 13607132 kB' 'Inactive: 3640756 kB' 'Active(anon): 13098704 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574076 kB' 'Mapped: 161724 kB' 'Shmem: 12527892 kB' 'KReclaimable: 462760 kB' 'Slab: 916856 kB' 'SReclaimable: 462760 kB' 'SUnreclaim: 454096 kB' 'KernelStack: 16448 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
[trace condensed: common.sh@31/@32 scan each /proc/meminfo key against HugePages_Surp; this excerpt ends mid-scan, with "continue" recorded for every key from MemTotal through CmaTotal]
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.091 10:24:12 
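The scan that just completed is one full pass of the get_meminfo helper in setup/common.sh: it slurps a meminfo file and compares every 'Key: value' line against the requested key (here HugePages_Surp, which came back 0). A minimal sketch of the pattern, reconstructed from this xtrace rather than copied from the SPDK source, so treat the exact function body as an assumption:

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) patterns below

    # get_meminfo <Key> [<numa-node>] -- reconstruction of the helper this trace shows
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read the node-local file instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # 'HugePages_Surp: 0' -> var, val
            [[ $var == "$get" ]] || continue        # the compare/continue seen above
            echo "$val"
            return 0
        done
        return 1
    }

The escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the trace is how bash xtrace prints the quoted right-hand side of [[ $var == "$get" ]]: each character is backslash-escaped to show it is matched literally rather than as a glob pattern, and every non-matching key costs one read and one failed compare.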
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.091 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38493204 kB' 'MemAvailable: 42457436 kB' 'Buffers: 2704 kB' 'Cached: 16674388 kB' 'SwapCached: 0 kB' 'Active: 13607140 kB' 'Inactive: 3640756 kB' 'Active(anon): 13098712 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574080 kB' 'Mapped: 161724 kB' 'Shmem: 12527908 kB' 'KReclaimable: 462760 kB' 'Slab: 916856 kB' 'SReclaimable: 462760 kB' 'SUnreclaim: 454096 kB' 'KernelStack: 16448 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
[... 00:03:24.091-00:03:24.093 setup/common.sh@31-32: the per-key scan repeats against HugePages_Rsvd, continuing past every key in the snapshot above from MemTotal through HugePages_Free ...]
00:03:24.093 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.093 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.093 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:24.094 nr_hugepages=1024
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:24.094 resv_hugepages=0
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:24.094 surplus_hugepages=0
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:24.094 anon_hugepages=0
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
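At this point setup/hugepages.sh has its three inputs: the requested nr_hugepages=1024, plus surp=0 and resv=0 read back from the kernel; the @107/@109 arithmetic and the HugePages_Total lookup that follows enforce that the pool the kernel actually built matches what was asked for. The same check as a standalone snippet (illustrative only, reusing the hypothetical get_meminfo sketch above):

    nr_hugepages=1024                      # requested pool size
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # 1024 == 1024 + 0 + 0 here, so the allocation neither shrank nor overshot
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool inconsistent' >&2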
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.094 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295452 kB' 'MemFree: 38494028 kB' 'MemAvailable: 42458260 kB' 'Buffers: 2704 kB' 'Cached: 16674428 kB' 'SwapCached: 0 kB' 'Active: 13606808 kB' 'Inactive: 3640756 kB' 'Active(anon): 13098380 kB' 'Inactive(anon): 0 kB' 'Active(file): 508428 kB' 'Inactive(file): 3640756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573692 kB' 'Mapped: 161724 kB' 'Shmem: 12527948 kB' 'KReclaimable: 462760 kB' 'Slab: 916856 kB' 'SReclaimable: 462760 kB' 'SUnreclaim: 454096 kB' 'KernelStack: 16432 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487752 kB' 'Committed_AS: 14464824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 59840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2168128 kB' 'DirectMap2M: 31062016 kB' 'DirectMap1G: 35651584 kB'
[... 00:03:24.094-00:03:24.096 setup/common.sh@31-32: the per-key scan repeats against HugePages_Total, continuing past every key in the snapshot above from MemTotal through Unaccepted ...]
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
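get_nodes has just enumerated the NUMA topology by globbing /sys/devices/system/node/node+([0-9]): two nodes, with all 1024 hugepages on node0 and none on node1, after which the @115-@117 loop re-checks each node individually. A sketch of that enumeration, under the same assumptions as the get_meminfo sketch earlier:

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces '/sys/devices/system/node/node0' to the index '0'
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}              # 2 on this machine
    (( no_nodes > 0 )) || exit 1

Note that the per-node meminfo files read by the node query (see the node0 snapshot below) carry a different field set from the global /proc/meminfo: MemUsed and FilePages appear, while MemAvailable, the swap counters, and CommitLimit do not.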
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.096 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 22283980 kB' 'MemUsed: 10349988 kB' 'SwapCached: 0 kB' 'Active: 6549808 kB' 'Inactive: 279304 kB' 'Active(anon): 6389412 kB' 'Inactive(anon): 0 kB' 'Active(file): 160396 kB' 'Inactive(file): 279304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6698636 kB' 'Mapped: 67360 kB' 'AnonPages: 133556 kB' 'Shmem: 6258936 kB' 'KernelStack: 9640 kB' 'PageTables: 3520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128160 kB' 'Slab: 389408 kB' 'SReclaimable: 128160 kB' 'SUnreclaim: 261248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... 00:03:24.096-00:03:24.357 setup/common.sh@31-32: the per-key scan repeats over node0's meminfo keys (MemTotal through Bounce at the point this excerpt breaks off), still looking for HugePages_Surp ...]
00:03:24.357 10:24:12 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:24.357 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.357 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.357 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.357 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.357 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:24.358 node0=1024 expecting 1024 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.358 00:03:24.358 real 0m6.666s 00:03:24.358 user 0m2.389s 00:03:24.358 sys 0m4.277s 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.358 10:24:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.358 ************************************ 00:03:24.358 END TEST no_shrink_alloc 00:03:24.358 ************************************ 00:03:24.358 10:24:12 setup.sh.hugepages -- 
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:24.358 10:24:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:24.358
00:03:24.358 real	0m24.989s
00:03:24.358 user	0m8.870s
00:03:24.358 sys	0m16.120s
00:03:24.358 10:24:12 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:24.358 10:24:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:24.358 ************************************
00:03:24.358 END TEST hugepages
00:03:24.358 ************************************
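clear_hp, traced just above, resets every hugepage pool by writing 0 into each per-node, per-size nr_hugepages knob (the log shows two nodes with two page sizes each, hence the four echo 0 calls). A sketch under that same sysfs layout (illustrative, not the verbatim hugepages.sh source; needs root):

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node*[0-9]; do
            # each node exposes one directory per supported hugepage size,
            # e.g. hugepages-2048kB and hugepages-1048576kB
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }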
00:03:24.358 10:24:12 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:24.358 10:24:12 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:24.358 10:24:12 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:24.358 10:24:12 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:24.358 ************************************
00:03:24.358 START TEST driver
00:03:24.358 ************************************
00:03:24.358 10:24:12 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:24.358 * Looking for test storage...
00:03:24.358 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:24.358 10:24:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:24.358 10:24:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:24.358 10:24:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:29.645 10:24:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:29.645 10:24:17 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:29.645 10:24:17 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:29.645 10:24:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:29.645 ************************************
00:03:29.645 START TEST guess_driver
00:03:29.645 ************************************
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 167 > 0 ))
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:29.645 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:29.645 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:29.645 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:29.645 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:29.645 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:29.645 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:29.645 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
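The driver pick traced above boils down to: if the host has IOMMU groups (167 here) or unsafe no-IOMMU mode is enabled, and modprobe can resolve vfio_pci and its dependency chain, choose vfio-pci. A hedged sketch of that decision; the failure string is taken from the driver.sh@51 comparison that follows, while the exact branch structure is inferred from the trace rather than copied from driver.sh:

    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is only usable with an IOMMU or in unsafe no-IOMMU mode
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
            # modprobe --show-depends prints "insmod .../vfio-pci.ko.xz" lines
            # when the module and all its dependencies can be resolved
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
    }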
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:29.645 Looking for driver=vfio-pci
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.645 10:24:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:32.936 10:24:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:32.936 10:24:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:32.936 10:24:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... repetitive xtrace elided: driver.sh@57-61 repeat the identical marker/driver check for each remaining device line emitted by setup.sh config, and every one reports vfio-pci ...]
00:03:32.936 10:24:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:32.936 10:24:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:32.936 10:24:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
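guess_driver then re-runs setup.sh in config mode and validates every emitted binding; the repeated '[[ -> == \-\> ]]' / '[[ vfio-pci == vfio-pci ]]' pairs come from a read loop over lines shaped like '<bdf> ... -> <driver>'. A sketch of that validation loop (the five-field read pattern is taken from the driver.sh@57 trace; the exact output line layout of setup.sh config is assumed):

    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        # only lines carrying the "->" marker describe a device binding
        [[ $marker != '->' ]] && continue
        # every bound device must have received the driver we picked
        [[ $setup_driver != "$driver" ]] && fail=1
    done < <(/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config)
    (( fail == 0 ))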
00:03:32.936 10:24:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:38.209
00:03:38.209 real	0m8.525s
00:03:38.209 user	0m2.677s
00:03:38.209 sys	0m5.107s
00:03:38.209 10:24:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:38.209 10:24:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:38.209 ************************************
00:03:38.209 END TEST guess_driver
00:03:38.209 ************************************
00:03:38.209
00:03:38.209 real	0m13.293s
00:03:38.209 user	0m3.850s
00:03:38.209 sys	0m7.804s
00:03:38.209 10:24:26 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:38.209 10:24:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:38.209 ************************************
00:03:38.209 END TEST driver
00:03:38.209 ************************************
00:03:38.209 10:24:26 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:38.209 10:24:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:38.209 10:24:26 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:38.209 10:24:26 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:38.209 ************************************
00:03:38.209 START TEST devices
00:03:38.209 ************************************
00:03:38.209 10:24:26 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:38.209 * Looking for test storage...
00:03:38.209 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:38.209 10:24:26 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:38.209 10:24:26 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:38.209 10:24:26 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:38.209 10:24:26 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
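get_zoned_devs, whose per-device checks open the devices suite above, filters out zoned block devices: anything whose queue/zoned sysfs attribute is not "none" is unusable for these tests (all three NVMe drives here read "none", so the '[[ none != none ]]' tests fail and nothing is recorded). A sketch of the technique (illustrative, not the verbatim autotest_common.sh source):

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # queue/zoned reads "none" for conventional drives and
        # "host-managed"/"host-aware" for zoned ones
        if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done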
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:03:42.405 10:24:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:42.405 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:42.405 10:24:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:42.405 10:24:30 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:42.406 No valid GPT data, bailing
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size ))
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:af:00.0
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]]
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1
00:03:42.406 No valid GPT data, bailing
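block_in_use treats a drive as occupied when spdk-gpt.py or blkid finds a partition table on it; "No valid GPT data, bailing" followed by an empty PTTYPE and "return 1" means the disk is free. The size gate that follows ((( 1920383410176 >= min_disk_size ))) comes from sec_size_to_bytes multiplying the 512-byte sector count out of sysfs. A sketch of both checks (the sysfs-based size computation is an assumption consistent with the traced values; min_disk_size is taken from devices.sh@198):

    min_disk_size=3221225472   # 3 GiB, as in devices.sh@198
    block_free_and_big_enough() {
        local block=$1 pt
        # a non-empty PTTYPE (gpt/dos/...) means the disk already carries data
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -n $pt ]] && return 1
        # /sys/block/<dev>/size is a count of 512-byte sectors
        local bytes=$(( $(< "/sys/block/$block/size") * 512 ))
        (( bytes >= min_disk_size ))
    }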
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size ))
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:af:00.0
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:b0:00.0
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]]
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1
00:03:42.406 No valid GPT data, bailing
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:42.406 10:24:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]]
00:03:42.406 10:24:30 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size ))
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:b0:00.0
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 3 > 0 ))
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:42.406 10:24:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:42.406 10:24:30 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:42.406 10:24:30 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:42.406 10:24:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:42.406 ************************************
00:03:42.406 START TEST nvme_mount
00:03:42.406 ************************************
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:42.406 10:24:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:42.976 Creating new GPT entries in memory.
00:03:42.976 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:42.976 other utilities.
00:03:42.976 10:24:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:42.976 10:24:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:42.976 10:24:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:42.976 10:24:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:42.976 10:24:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:44.357 Creating new GPT entries in memory.
00:03:44.357 The operation has completed successfully.
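partition_drive first wipes any existing table and then creates a single 1 GiB partition; the (( size /= 512 )) step turns the 1073741824-byte request into 2097152 sectors, which explains the --new=1:2048:2099199 range (2048 + 2097152 - 1 = 2099199). A sketch of those two sgdisk calls, condensed from the trace (needs root; destroys data on the disk):

    disk=/dev/nvme0n1
    size=1073741824              # 1 GiB requested
    (( size /= 512 ))            # sgdisk works in 512-byte sectors -> 2097152
    sgdisk "$disk" --zap-all     # destroy existing GPT/MBR structures
    # first usable sector is 2048; serialize against other partitioners with flock
    flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + size - 1 ))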
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3396608
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.357 10:24:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:47.017 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:47.017 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:47.017 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:47.017 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
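The mkfs helper then formats the new partition, mounts it under the test directory, and drops a dummy file for the verify pass to find. A sketch of that step (the ':' traced at devices.sh@56 is assumed to be a ':' no-op with output redirection creating the test file; SPDK_TEST_DIR below is a hypothetical shorthand for the full workspace path in the log):

    SPDK_TEST_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
    dev=/dev/nvme0n1p1
    mnt=$SPDK_TEST_DIR/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"       # -q quiet, -F force even if the target looks busy
    mount "$dev" "$mnt"
    : > "$mnt/test_nvme"       # create the dummy file verify will look for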
00:03:47.017 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:47.017 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... repetitive xtrace elided: devices.sh@60-62 repeat the same PCI-address comparison against 0000:5e:00.0 for 0000:00:04.0-7, 0000:b0:00.0 and 0000:80:04.0-7; none match ...]
00:03:47.536 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:47.536 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:47.536 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:47.536 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:47.536 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:47.536 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:47.536 10:24:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:47.536 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:47.536 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:47.536 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:47.536 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:47.536 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:47.536 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:47.795 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:47.795 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:03:47.795 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:47.795 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:47.795 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:47.795 10:24:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:47.795 10:24:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:47.795 10:24:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:47.795 10:24:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.054 10:24:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... repetitive xtrace elided: devices.sh@60-62 repeat the same PCI-address comparison against 0000:5e:00.0 for 0000:af:00.0, 0000:00:04.0-7, 0000:b0:00.0 and 0000:80:04.0-7; none match ...]
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
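Each verify pass above pins setup.sh to a single controller with PCI_ALLOWED and walks the config output, expecting the one matching line to name the active mount (or data holder) instead of a driver binding. A sketch of that check (the four-field read pattern and the status-line shape are taken from the devices.sh@60-62 trace; the variable names allowed and mounts stand in for verify's positional arguments):

    allowed=0000:5e:00.0        # the controller under test
    mounts=nvme0n1:nvme0n1p1    # what setup.sh should report as holding it
    found=0
    while read -r pci _ _ status; do
        [[ $pci != "$allowed" ]] && continue
        # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$allowed \
        /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 ))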
setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.346 10:24:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.635 10:24:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:54.635 10:24:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.635 10:24:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.635 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.905 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.905 00:03:54.905 real 0m12.937s 00:03:54.905 user 0m3.932s 00:03:54.905 sys 0m6.925s 00:03:54.905 10:24:43 
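The cleanup_nvme pass above, condensed (mount point shortened; device names as in this run):

mnt=test/setup/nvme_mount              # shortened from the full workspace path
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1   # erases the ext4 magic (53 ef) seen above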
setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.905 10:24:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:54.905 ************************************ 00:03:54.905 END TEST nvme_mount 00:03:54.905 ************************************ 00:03:55.166 10:24:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:55.166 10:24:43 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.166 10:24:43 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.166 10:24:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.166 ************************************ 00:03:55.166 START TEST dm_mount 00:03:55.166 ************************************ 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:55.166 10:24:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:56.105 Creating new GPT entries in memory. 00:03:56.105 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:56.105 other utilities. 00:03:56.105 10:24:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:56.105 10:24:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.105 10:24:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:56.105 10:24:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.105 10:24:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:57.043 Creating new GPT entries in memory. 00:03:57.043 The operation has completed successfully. 00:03:57.043 10:24:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:57.043 10:24:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.043 10:24:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.043 10:24:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.043 10:24:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:58.422 The operation has completed successfully. 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3400736 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- 
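At this point dm_mount has zapped the disk, carved two 1 GiB partitions, assembled a device-mapper target over them, and formatted it. Condensed below, with the caveat that the exact dmsetup table is not echoed in the trace, so the linear mapping is illustrative:

sgdisk /dev/nvme0n1 --zap-all
sgdisk /dev/nvme0n1 --new=1:2048:2099199      # 2097152 sectors = 1 GiB
sgdisk /dev/nvme0n1 --new=2:2099200:4196351   # second 1 GiB slice
# illustrative table; the harness's actual table is not shown in the trace
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test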
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:58.422 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.423 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.423 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.423 10:24:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.423 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.423 10:24:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:00.965 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.965 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:00.965 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:00.965 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.965 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.965 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.224 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:01.483 10:24:49 
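The holder@nvme0n1p1:dm-0 tokens in the status line above come straight from sysfs; roughly:

for part in nvme0n1p1 nvme0n1p2; do
  for h in /sys/class/block/"$part"/holders/*; do
    echo "holder@$part:${h##*/}"   # -> holder@nvme0n1p1:dm-0, holder@nvme0n1p2:dm-0
  done
done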
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.483 10:24:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:04.776 10:24:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:04.776 10:24:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:04.776 10:24:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.776 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:05.035 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:05.035 00:04:05.035 real 0m10.018s 00:04:05.035 user 0m2.506s 00:04:05.035 sys 0m4.599s 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.035 10:24:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:05.035 ************************************ 00:04:05.035 END TEST dm_mount 00:04:05.035 ************************************ 00:04:05.035 10:24:53 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:05.035 10:24:53 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:05.035 10:24:53 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.035 10:24:53 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.035 10:24:53 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:05.035 10:24:53 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.035 10:24:53 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.293 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:05.293 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:05.293 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:05.293 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:05.293 10:24:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:05.293 10:24:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:05.293 10:24:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:05.293 10:24:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.293 10:24:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:05.293 10:24:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.293 10:24:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:05.293 00:04:05.293 real 0m27.731s 00:04:05.293 user 0m8.140s 00:04:05.293 sys 0m14.529s 00:04:05.293 10:24:53 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.293 10:24:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:05.293 ************************************ 00:04:05.293 END TEST devices 00:04:05.293 ************************************ 00:04:05.552 00:04:05.552 real 1m31.352s 00:04:05.552 user 0m28.647s 00:04:05.552 sys 0m53.723s 00:04:05.552 10:24:53 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.552 10:24:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.552 ************************************ 00:04:05.552 END TEST setup.sh 00:04:05.552 ************************************ 00:04:05.552 10:24:53 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:08.845 Hugepages 00:04:08.845 node hugesize free / total 00:04:08.845 node0 1048576kB 0 / 
0 00:04:08.845 node0 2048kB 2048 / 2048 00:04:08.845 node1 1048576kB 0 / 0 00:04:08.845 node1 2048kB 0 / 0 00:04:08.845 00:04:08.845 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.845 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:08.845 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:08.845 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:08.845 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:08.845 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:08.845 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:08.845 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:08.845 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:08.845 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:08.845 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:08.845 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:08.845 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:08.845 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:08.845 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:08.845 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:08.845 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:08.845 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:08.845 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1 00:04:08.845 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1 00:04:08.845 10:24:57 -- spdk/autotest.sh@130 -- # uname -s 00:04:08.845 10:24:57 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:08.845 10:24:57 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:08.845 10:24:57 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:12.137 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:04:12.137 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:04:12.137 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.137 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.044 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:14.044 10:25:02 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:14.982 10:25:03 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:14.982 10:25:03 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:14.982 10:25:03 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:14.982 10:25:03 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:14.982 10:25:03 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:14.982 10:25:03 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:14.982 10:25:03 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.982 10:25:03 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.982 10:25:03 -- common/autotest_common.sh@1510 
-- # jq -r '.config[].params.traddr' 00:04:14.983 10:25:03 -- common/autotest_common.sh@1511 -- # (( 3 == 0 )) 00:04:14.983 10:25:03 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:04:14.983 10:25:03 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.275 Waiting for block devices as requested 00:04:18.275 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:04:18.275 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:04:18.534 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:18.534 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:18.534 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:18.792 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:18.792 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:18.792 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.051 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.051 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.051 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:04:19.310 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.310 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.310 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.569 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.569 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.569 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.569 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.830 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.830 10:25:08 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:19.830 10:25:08 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # grep 0000:5e:00.0/nvme/nvme 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:04:19.830 10:25:08 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:19.830 10:25:08 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:19.830 10:25:08 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:19.830 10:25:08 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1553 -- # continue 00:04:19.830 10:25:08 -- common/autotest_common.sh@1534 -- # for 
bdf in "${bdfs[@]}" 00:04:19.830 10:25:08 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # grep 0000:af:00.0/nvme/nvme 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:04:19.830 10:25:08 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:04:19.830 10:25:08 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:04:19.830 10:25:08 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:04:19.830 10:25:08 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # oacs=' 0x7' 00:04:19.830 10:25:08 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1544 -- # [[ 0 -ne 0 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:19.830 10:25:08 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # grep 0000:b0:00.0/nvme/nvme 00:04:19.830 10:25:08 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:19.830 10:25:08 -- common/autotest_common.sh@1541 -- # oacs=' 0x7' 00:04:19.830 10:25:08 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=0 00:04:19.830 10:25:08 -- common/autotest_common.sh@1544 -- # [[ 0 -ne 0 ]] 00:04:19.830 10:25:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:19.830 10:25:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.830 10:25:08 -- common/autotest_common.sh@10 -- # set +x 00:04:19.830 10:25:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:19.830 10:25:08 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:19.830 10:25:08 -- common/autotest_common.sh@10 -- # set +x 00:04:20.090 10:25:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:23.381 0000:00:04.7 (8086 2021): ioatdma 
-> vfio-pci 00:04:23.381 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:04:23.381 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:23.381 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:04:23.381 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.381 10:25:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:23.381 10:25:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.381 10:25:11 -- common/autotest_common.sh@10 -- # set +x 00:04:23.381 10:25:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:23.381 10:25:11 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:23.381 10:25:11 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:23.381 10:25:11 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:23.381 10:25:11 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:23.381 10:25:11 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:23.381 10:25:11 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:23.381 10:25:11 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:23.381 10:25:11 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.381 10:25:11 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:23.381 10:25:11 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:23.640 10:25:11 -- common/autotest_common.sh@1511 -- # (( 3 == 0 )) 00:04:23.640 10:25:11 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:04:23.640 10:25:11 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:23.640 10:25:11 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:23.640 10:25:11 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:04:23.640 10:25:11 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:23.640 10:25:11 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:23.640 10:25:11 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:af:00.0/device 00:04:23.640 10:25:11 -- common/autotest_common.sh@1576 -- # device=0x2701 00:04:23.640 10:25:11 -- common/autotest_common.sh@1577 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:23.640 10:25:11 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:23.640 10:25:11 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device 00:04:23.640 10:25:11 -- common/autotest_common.sh@1576 -- # device=0x2701 00:04:23.640 10:25:11 -- common/autotest_common.sh@1577 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:23.640 10:25:11 -- 
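The [[ 0x... == 0x0a54 ]] checks here (closed out by the printf just below) are opal_revert_cleanup asking whether any controller carries PCI device ID 0x0a54; on this node the IDs read back as 0xa80a and twice 0x2701, so the list stays empty. In isolation:

for bdf in 0000:5e:00.0 0000:af:00.0 0000:b0:00.0; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")
  [[ $device == 0x0a54 ]] && echo "$bdf"
done
# prints nothing on this node, so the opal revert step is skipped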
common/autotest_common.sh@1582 -- # printf '%s\n' 00:04:23.640 10:25:11 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:04:23.640 10:25:11 -- common/autotest_common.sh@1589 -- # return 0 00:04:23.640 10:25:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:23.640 10:25:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:23.640 10:25:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:23.640 10:25:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:23.640 10:25:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:23.640 10:25:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:23.640 10:25:11 -- common/autotest_common.sh@10 -- # set +x 00:04:23.640 10:25:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:23.640 10:25:11 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:23.640 10:25:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.640 10:25:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.640 10:25:11 -- common/autotest_common.sh@10 -- # set +x 00:04:23.640 ************************************ 00:04:23.640 START TEST env 00:04:23.640 ************************************ 00:04:23.640 10:25:12 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:23.640 * Looking for test storage... 00:04:23.640 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:23.640 10:25:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:23.640 10:25:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.640 10:25:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.640 10:25:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.900 ************************************ 00:04:23.900 START TEST env_memory 00:04:23.900 ************************************ 00:04:23.900 10:25:12 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:23.900 00:04:23.900 00:04:23.900 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.900 http://cunit.sourceforge.net/ 00:04:23.900 00:04:23.900 00:04:23.900 Suite: memory 00:04:23.900 Test: alloc and free memory map ...[2024-07-23 10:25:12.191636] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.900 passed 00:04:23.900 Test: mem map translation ...[2024-07-23 10:25:12.204953] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.900 [2024-07-23 10:25:12.204970] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.900 [2024-07-23 10:25:12.205002] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.900 [2024-07-23 10:25:12.205015] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.900 passed 00:04:23.900 Test: mem map registration ...[2024-07-23 
10:25:12.226967] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:23.900 [2024-07-23 10:25:12.226982] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:23.900 passed 00:04:23.900 Test: mem map adjacent registrations ...passed 00:04:23.900 00:04:23.900 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.900 suites 1 1 n/a 0 0 00:04:23.900 tests 4 4 4 0 0 00:04:23.900 asserts 152 152 152 0 n/a 00:04:23.900 00:04:23.900 Elapsed time = 0.088 seconds 00:04:23.900 00:04:23.900 real 0m0.101s 00:04:23.900 user 0m0.092s 00:04:23.900 sys 0m0.008s 00:04:23.900 10:25:12 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.900 10:25:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.900 ************************************ 00:04:23.900 END TEST env_memory 00:04:23.900 ************************************ 00:04:23.900 10:25:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:23.900 10:25:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.900 10:25:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.900 10:25:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.900 ************************************ 00:04:23.900 START TEST env_vtophys 00:04:23.900 ************************************ 00:04:23.900 10:25:12 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:23.900 EAL: lib.eal log level changed from notice to debug 00:04:23.900 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.900 EAL: Detected lcore 1 as core 1 on socket 0 00:04:23.900 EAL: Detected lcore 2 as core 2 on socket 0 00:04:23.900 EAL: Detected lcore 3 as core 3 on socket 0 00:04:23.900 EAL: Detected lcore 4 as core 4 on socket 0 00:04:23.900 EAL: Detected lcore 5 as core 8 on socket 0 00:04:23.900 EAL: Detected lcore 6 as core 9 on socket 0 00:04:23.900 EAL: Detected lcore 7 as core 10 on socket 0 00:04:23.900 EAL: Detected lcore 8 as core 11 on socket 0 00:04:23.900 EAL: Detected lcore 9 as core 16 on socket 0 00:04:23.900 EAL: Detected lcore 10 as core 17 on socket 0 00:04:23.900 EAL: Detected lcore 11 as core 18 on socket 0 00:04:23.900 EAL: Detected lcore 12 as core 19 on socket 0 00:04:23.900 EAL: Detected lcore 13 as core 20 on socket 0 00:04:23.900 EAL: Detected lcore 14 as core 24 on socket 0 00:04:23.900 EAL: Detected lcore 15 as core 25 on socket 0 00:04:23.900 EAL: Detected lcore 16 as core 26 on socket 0 00:04:23.900 EAL: Detected lcore 17 as core 27 on socket 0 00:04:23.900 EAL: Detected lcore 18 as core 0 on socket 1 00:04:23.900 EAL: Detected lcore 19 as core 1 on socket 1 00:04:23.900 EAL: Detected lcore 20 as core 2 on socket 1 00:04:23.900 EAL: Detected lcore 21 as core 3 on socket 1 00:04:23.900 EAL: Detected lcore 22 as core 4 on socket 1 00:04:23.900 EAL: Detected lcore 23 as core 8 on socket 1 00:04:23.900 EAL: Detected lcore 24 as core 9 on socket 1 00:04:23.900 EAL: Detected lcore 25 as core 10 on socket 1 00:04:23.900 EAL: Detected lcore 26 as core 11 on socket 1 00:04:23.900 EAL: Detected lcore 27 as core 16 on socket 1 00:04:23.900 EAL: Detected lcore 28 as core 17 on socket 1 00:04:23.900 EAL: Detected 
lcore 29 as core 18 on socket 1 00:04:23.900 EAL: Detected lcore 30 as core 19 on socket 1 00:04:23.900 EAL: Detected lcore 31 as core 20 on socket 1 00:04:23.900 EAL: Detected lcore 32 as core 24 on socket 1 00:04:23.900 EAL: Detected lcore 33 as core 25 on socket 1 00:04:23.900 EAL: Detected lcore 34 as core 26 on socket 1 00:04:23.900 EAL: Detected lcore 35 as core 27 on socket 1 00:04:23.900 EAL: Detected lcore 36 as core 0 on socket 0 00:04:23.900 EAL: Detected lcore 37 as core 1 on socket 0 00:04:23.901 EAL: Detected lcore 38 as core 2 on socket 0 00:04:23.901 EAL: Detected lcore 39 as core 3 on socket 0 00:04:23.901 EAL: Detected lcore 40 as core 4 on socket 0 00:04:23.901 EAL: Detected lcore 41 as core 8 on socket 0 00:04:23.901 EAL: Detected lcore 42 as core 9 on socket 0 00:04:23.901 EAL: Detected lcore 43 as core 10 on socket 0 00:04:23.901 EAL: Detected lcore 44 as core 11 on socket 0 00:04:23.901 EAL: Detected lcore 45 as core 16 on socket 0 00:04:23.901 EAL: Detected lcore 46 as core 17 on socket 0 00:04:23.901 EAL: Detected lcore 47 as core 18 on socket 0 00:04:23.901 EAL: Detected lcore 48 as core 19 on socket 0 00:04:23.901 EAL: Detected lcore 49 as core 20 on socket 0 00:04:23.901 EAL: Detected lcore 50 as core 24 on socket 0 00:04:23.901 EAL: Detected lcore 51 as core 25 on socket 0 00:04:23.901 EAL: Detected lcore 52 as core 26 on socket 0 00:04:23.901 EAL: Detected lcore 53 as core 27 on socket 0 00:04:23.901 EAL: Detected lcore 54 as core 0 on socket 1 00:04:23.901 EAL: Detected lcore 55 as core 1 on socket 1 00:04:23.901 EAL: Detected lcore 56 as core 2 on socket 1 00:04:23.901 EAL: Detected lcore 57 as core 3 on socket 1 00:04:23.901 EAL: Detected lcore 58 as core 4 on socket 1 00:04:23.901 EAL: Detected lcore 59 as core 8 on socket 1 00:04:23.901 EAL: Detected lcore 60 as core 9 on socket 1 00:04:23.901 EAL: Detected lcore 61 as core 10 on socket 1 00:04:23.901 EAL: Detected lcore 62 as core 11 on socket 1 00:04:23.901 EAL: Detected lcore 63 as core 16 on socket 1 00:04:23.901 EAL: Detected lcore 64 as core 17 on socket 1 00:04:23.901 EAL: Detected lcore 65 as core 18 on socket 1 00:04:23.901 EAL: Detected lcore 66 as core 19 on socket 1 00:04:23.901 EAL: Detected lcore 67 as core 20 on socket 1 00:04:23.901 EAL: Detected lcore 68 as core 24 on socket 1 00:04:23.901 EAL: Detected lcore 69 as core 25 on socket 1 00:04:23.901 EAL: Detected lcore 70 as core 26 on socket 1 00:04:23.901 EAL: Detected lcore 71 as core 27 on socket 1 00:04:23.901 EAL: Maximum logical cores by configuration: 128 00:04:23.901 EAL: Detected CPU lcores: 72 00:04:23.901 EAL: Detected NUMA nodes: 2 00:04:23.901 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:23.901 EAL: Checking presence of .so 'librte_eal.so.23' 00:04:23.901 EAL: Checking presence of .so 'librte_eal.so' 00:04:23.901 EAL: Detected static linkage of DPDK 00:04:23.901 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.901 EAL: Bus pci wants IOVA as 'DC' 00:04:23.901 EAL: Buses did not request a specific IOVA mode. 00:04:23.901 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:23.901 EAL: Selected IOVA mode 'VA' 00:04:23.901 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.901 EAL: Probing VFIO support... 
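The topology EAL just enumerated (72 lcores across 2 sockets, hyperthread siblings offset by 36) is read from sysfs; the same data is visible with, for example:

cat /sys/devices/system/cpu/cpu0/topology/core_id              # -> 0 ("core 0")
cat /sys/devices/system/cpu/cpu0/topology/physical_package_id  # -> 0 ("socket 0")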
00:04:23.901 EAL: IOMMU type 1 (Type 1) is supported 00:04:23.901 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:23.901 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:23.901 EAL: VFIO support initialized 00:04:23.901 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.901 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.901 EAL: Setting up physically contiguous memory... 00:04:23.901 EAL: Setting maximum number of open files to 524288 00:04:23.901 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.901 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:23.901 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.901 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:23.901 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.901 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:23.901 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.901 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.901 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:23.901 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:23.901 EAL: Hugepages will be freed exactly as allocated. 00:04:23.901 EAL: No shared files mode enabled, IPC is disabled 00:04:23.901 EAL: No shared files mode enabled, IPC is disabled 00:04:23.901 EAL: TSC frequency is ~2300000 KHz 00:04:23.901 EAL: Main lcore 0 is ready (tid=7f4d87c41a00;cpuset=[0]) 00:04:23.901 EAL: Trying to obtain current memory policy. 00:04:23.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.901 EAL: Restoring previous memory policy: 0 00:04:23.901 EAL: request: mp_malloc_sync 00:04:23.901 EAL: No shared files mode enabled, IPC is disabled 00:04:23.901 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.901 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Mem event callback 'spdk:(nil)' registered 00:04:24.160 00:04:24.160 00:04:24.160 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.160 http://cunit.sourceforge.net/ 00:04:24.160 00:04:24.160 00:04:24.160 Suite: components_suite 00:04:24.160 Test: vtophys_malloc_test ...passed 00:04:24.160 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 4MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 4MB 00:04:24.160 EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 6MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 6MB 00:04:24.160 EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 10MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 10MB 00:04:24.160 EAL: Trying to obtain current memory policy. 
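Note: each "Ask a virtual area of 0x400000000 bytes" above reserves one memseg list of n_segs 8192 × 2 MiB hugepages = 16 GiB (0x400000000), so the four lists per socket cover up to 64 GiB per NUMA node. The heap expand/shrink pairs around this point are delivered through the mem event callback registered as 'spdk:(nil)'. A sketch of registering such a callback — a logging stand-in, not SPDK's real handler (which lives in lib/env_dpdk and maintains the vtophys map):

```c
/* Sketch: register a DPDK memory-event callback like the
 * "Mem event callback 'spdk:(nil)' registered" line above.
 * SPDK's real handler updates its translation maps; this one just logs. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_memory.h>

static void
mem_event_cb(enum rte_mem_event event_type, const void *addr,
             size_t len, void *arg)
{
    (void)arg;
    printf("mem event: %s addr=%p len=%zu\n",
           event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free",
           addr, len);
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;
    /* The name is only a label; SPDK uses a "spdk:(nil)"-style name. */
    if (rte_mem_event_callback_register("log-cb", mem_event_cb, NULL) != 0)
        fprintf(stderr, "callback registration failed\n");
    /* rte_malloc()/rte_free() beyond the preallocated heap would now
     * trigger alloc/free events like the expand/shrink lines nearby. */
    rte_eal_cleanup();
    return 0;
}
```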
00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 18MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 18MB 00:04:24.160 EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 34MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 34MB 00:04:24.160 EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 66MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 66MB 00:04:24.160 EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 130MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 130MB 00:04:24.160 EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 258MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.419 EAL: request: mp_malloc_sync 00:04:24.419 EAL: No shared files mode enabled, IPC is disabled 00:04:24.419 EAL: Heap on socket 0 was shrunk by 258MB 00:04:24.419 EAL: Trying to obtain current memory policy. 
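Note: the expansion sizes above and below (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) follow a 2^k + 2 MB pattern, so each allocation outgrows the previous heap and each matching free shrinks it straight back. A sketch of the same allocate/translate/free probe against SPDK's env API; the two-argument spdk_vtophys() signature is assumed from this SPDK version's headers, and spdk_env_init() is assumed to have run already:

```c
/* Sketch of a vtophys-style probe: DMA-safe buffers of (2^k + 2) MiB,
 * translated then freed, mirroring the expand/shrink pairs in the log. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

static void probe_sizes(void)
{
    for (unsigned k = 1; k <= 10; k++) {
        size_t sz = ((1ULL << k) + 2) * 1024 * 1024; /* 4, 6, 10, ... MiB */
        void *buf = spdk_malloc(sz, 0x1000, NULL,
                                SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        if (buf == NULL) {
            fprintf(stderr, "alloc of %zu bytes failed\n", sz);
            continue;
        }
        /* Translation must succeed for registered DMA memory. */
        uint64_t phys = spdk_vtophys(buf, NULL);
        printf("%zu MiB at %p -> 0x%" PRIx64 "\n", sz >> 20, buf, phys);
        spdk_free(buf); /* heap shrinks back, as in the log */
    }
}
```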
00:04:24.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.419 EAL: Restoring previous memory policy: 4 00:04:24.419 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.419 EAL: request: mp_malloc_sync 00:04:24.419 EAL: No shared files mode enabled, IPC is disabled 00:04:24.419 EAL: Heap on socket 0 was expanded by 514MB 00:04:24.419 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.678 EAL: request: mp_malloc_sync 00:04:24.678 EAL: No shared files mode enabled, IPC is disabled 00:04:24.678 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.678 EAL: Trying to obtain current memory policy. 00:04:24.678 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.938 EAL: Restoring previous memory policy: 4 00:04:24.938 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.938 EAL: request: mp_malloc_sync 00:04:24.938 EAL: No shared files mode enabled, IPC is disabled 00:04:24.938 EAL: Heap on socket 0 was expanded by 1026MB 00:04:24.938 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.200 EAL: request: mp_malloc_sync 00:04:25.200 EAL: No shared files mode enabled, IPC is disabled 00:04:25.200 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:25.200 passed 00:04:25.200 00:04:25.200 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.200 suites 1 1 n/a 0 0 00:04:25.200 tests 2 2 2 0 0 00:04:25.200 asserts 497 497 497 0 n/a 00:04:25.200 00:04:25.200 Elapsed time = 1.130 seconds 00:04:25.200 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.200 EAL: request: mp_malloc_sync 00:04:25.200 EAL: No shared files mode enabled, IPC is disabled 00:04:25.200 EAL: Heap on socket 0 was shrunk by 2MB 00:04:25.200 EAL: No shared files mode enabled, IPC is disabled 00:04:25.200 EAL: No shared files mode enabled, IPC is disabled 00:04:25.200 EAL: No shared files mode enabled, IPC is disabled 00:04:25.200 00:04:25.200 real 0m1.231s 00:04:25.200 user 0m0.698s 00:04:25.200 sys 0m0.509s 00:04:25.200 10:25:13 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.200 10:25:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:25.200 ************************************ 00:04:25.200 END TEST env_vtophys 00:04:25.200 ************************************ 00:04:25.201 10:25:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.201 10:25:13 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.201 10:25:13 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.201 10:25:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.201 ************************************ 00:04:25.201 START TEST env_pci 00:04:25.201 ************************************ 00:04:25.201 10:25:13 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.201 00:04:25.201 00:04:25.201 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.201 http://cunit.sourceforge.net/ 00:04:25.201 00:04:25.201 00:04:25.201 Suite: pci 00:04:25.201 Test: pci_hook ...[2024-07-23 10:25:13.645226] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3409860 has claimed it 00:04:25.201 EAL: Cannot find device (10000:00:01.0) 00:04:25.201 EAL: Failed to attach device on primary process 00:04:25.201 passed 00:04:25.201 00:04:25.201 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:25.201 suites 1 1 n/a 0 0 00:04:25.201 tests 1 1 1 0 0 00:04:25.201 asserts 25 25 25 0 n/a 00:04:25.201 00:04:25.201 Elapsed time = 0.024 seconds 00:04:25.201 00:04:25.201 real 0m0.034s 00:04:25.201 user 0m0.009s 00:04:25.201 sys 0m0.025s 00:04:25.201 10:25:13 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.201 10:25:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:25.201 ************************************ 00:04:25.201 END TEST env_pci 00:04:25.201 ************************************ 00:04:25.540 10:25:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:25.540 10:25:13 env -- env/env.sh@15 -- # uname 00:04:25.540 10:25:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:25.540 10:25:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:25.540 10:25:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.540 10:25:13 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:25.540 10:25:13 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.540 10:25:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.540 ************************************ 00:04:25.540 START TEST env_dpdk_post_init 00:04:25.540 ************************************ 00:04:25.540 10:25:13 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.540 EAL: Detected CPU lcores: 72 00:04:25.540 EAL: Detected NUMA nodes: 2 00:04:25.540 EAL: Detected static linkage of DPDK 00:04:25.540 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.540 EAL: Selected IOVA mode 'VA' 00:04:25.540 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.540 EAL: VFIO support initialized 00:04:25.540 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.540 EAL: Using IOMMU type 1 (Type 1) 00:04:25.819 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:04:26.092 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1) 00:04:26.092 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1) 00:04:26.092 EAL: Releasing PCI mapped resource for 0000:af:00.0 00:04:26.092 EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001004000 00:04:26.351 EAL: Releasing PCI mapped resource for 0000:b0:00.0 00:04:26.351 EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001008000 00:04:26.351 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:26.351 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:04:26.611 Starting DPDK initialization... 00:04:26.611 Starting SPDK post initialization... 00:04:26.611 SPDK NVMe probe 00:04:26.611 Attaching to 0000:5e:00.0 00:04:26.611 Attaching to 0000:af:00.0 00:04:26.611 Attaching to 0000:b0:00.0 00:04:26.611 Attached to 0000:af:00.0 00:04:26.611 Attached to 0000:b0:00.0 00:04:26.611 Attached to 0000:5e:00.0 00:04:26.611 Cleaning up... 
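Note: the probe/attach lines above are the spdk_nvme driver claiming each controller and unmapping its BARs afterwards. A sketch of the underlying flow, patterned on SPDK's hello_world example — the app name and error handling here are illustrative:

```c
/* Sketch of the probe/attach flow behind the "Attaching to 0000:5e:00.0"
 * lines, patterned on SPDK's hello_world example. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    (void)ctx; (void)opts;
    printf("Attaching to %s\n", trid->traddr);
    return true; /* claim every controller we see */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
    (void)ctx; (void)ctrlr; (void)opts;
    printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "post_init_sketch";
    if (spdk_env_init(&opts) < 0)
        return 1;
    /* NULL trid: enumerate all local PCIe NVMe devices. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        fprintf(stderr, "spdk_nvme_probe failed\n");
        return 1;
    }
    return 0;
}
```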
00:04:26.611 00:04:26.611 real 0m1.130s 00:04:26.611 user 0m0.350s 00:04:26.611 sys 0m0.099s 00:04:26.611 10:25:14 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.611 10:25:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.611 ************************************ 00:04:26.611 END TEST env_dpdk_post_init 00:04:26.611 ************************************ 00:04:26.611 10:25:14 env -- env/env.sh@26 -- # uname 00:04:26.611 10:25:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:26.611 10:25:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.611 10:25:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.611 10:25:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.611 10:25:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.611 ************************************ 00:04:26.611 START TEST env_mem_callbacks 00:04:26.611 ************************************ 00:04:26.611 10:25:14 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.611 EAL: Detected CPU lcores: 72 00:04:26.611 EAL: Detected NUMA nodes: 2 00:04:26.611 EAL: Detected static linkage of DPDK 00:04:26.611 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.611 EAL: Selected IOVA mode 'VA' 00:04:26.611 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.611 EAL: VFIO support initialized 00:04:26.611 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.611 00:04:26.611 00:04:26.611 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.611 http://cunit.sourceforge.net/ 00:04:26.611 00:04:26.611 00:04:26.611 Suite: memory 00:04:26.611 Test: test ... 
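Note: the register/unregister lines that follow are the mem_callbacks test feeding externally allocated buffers into SPDK's address-translation map via spdk_mem_register()/spdk_mem_unregister(). A sketch under the assumption that registration wants 2 MiB-granular, hugepage-backed memory and that the SPDK env is already initialized:

```c
/* Sketch of the register/unregister calls traced below. Assumes free
 * 2 MiB hugepages are available; spdk_mem_register() operates on
 * 2 MiB-granular regions. */
#define _GNU_SOURCE /* MAP_HUGETLB */
#include <stdio.h>
#include <sys/mman.h>
#include "spdk/env.h"

#define REGION_LEN (2 * 1024 * 1024)

static int register_external_buffer(void)
{
    void *buf = mmap(NULL, REGION_LEN, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return -1;
    }
    if (spdk_mem_register(buf, REGION_LEN) != 0) {   /* "register ..." line */
        fprintf(stderr, "spdk_mem_register failed\n");
        munmap(buf, REGION_LEN);
        return -1;
    }
    /* Buffer is now visible to spdk_vtophys() and DMA users. */
    spdk_mem_unregister(buf, REGION_LEN);            /* "unregister ..." line */
    munmap(buf, REGION_LEN);
    return 0;
}
```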
00:04:26.611 register 0x200000200000 2097152 00:04:26.611 malloc 3145728 00:04:26.611 register 0x200000400000 4194304 00:04:26.611 buf 0x200000500000 len 3145728 PASSED 00:04:26.611 malloc 64 00:04:26.611 buf 0x2000004fff40 len 64 PASSED 00:04:26.611 malloc 4194304 00:04:26.611 register 0x200000800000 6291456 00:04:26.611 buf 0x200000a00000 len 4194304 PASSED 00:04:26.611 free 0x200000500000 3145728 00:04:26.611 free 0x2000004fff40 64 00:04:26.611 unregister 0x200000400000 4194304 PASSED 00:04:26.611 free 0x200000a00000 4194304 00:04:26.611 unregister 0x200000800000 6291456 PASSED 00:04:26.611 malloc 8388608 00:04:26.611 register 0x200000400000 10485760 00:04:26.611 buf 0x200000600000 len 8388608 PASSED 00:04:26.611 free 0x200000600000 8388608 00:04:26.611 unregister 0x200000400000 10485760 PASSED 00:04:26.611 passed 00:04:26.611 00:04:26.611 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.611 suites 1 1 n/a 0 0 00:04:26.611 tests 1 1 1 0 0 00:04:26.611 asserts 15 15 15 0 n/a 00:04:26.611 00:04:26.611 Elapsed time = 0.006 seconds 00:04:26.611 00:04:26.611 real 0m0.065s 00:04:26.611 user 0m0.017s 00:04:26.611 sys 0m0.047s 00:04:26.611 10:25:15 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.611 10:25:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:26.611 ************************************ 00:04:26.611 END TEST env_mem_callbacks 00:04:26.611 ************************************ 00:04:26.611 00:04:26.611 real 0m3.057s 00:04:26.611 user 0m1.355s 00:04:26.611 sys 0m1.037s 00:04:26.611 10:25:15 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.611 10:25:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.611 ************************************ 00:04:26.611 END TEST env 00:04:26.611 ************************************ 00:04:26.871 10:25:15 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.871 10:25:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.871 10:25:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.871 10:25:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.871 ************************************ 00:04:26.871 START TEST rpc 00:04:26.871 ************************************ 00:04:26.871 10:25:15 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.871 * Looking for test storage... 00:04:26.871 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:26.871 10:25:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3410160 00:04:26.871 10:25:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.871 10:25:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:26.871 10:25:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3410160 00:04:26.871 10:25:15 rpc -- common/autotest_common.sh@827 -- # '[' -z 3410160 ']' 00:04:26.871 10:25:15 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.871 10:25:15 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:26.871 10:25:15 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
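Note: rpc_cmd in this harness ultimately writes JSON-RPC 2.0 requests to the socket the target just announced (/var/tmp/spdk.sock). A bare-bones client doing the same by hand — one request, one best-effort read; a robust client would keep buffering until a complete JSON object arrives:

```c
/* Bare-bones JSON-RPC probe of the socket announced above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"method\":\"spdk_get_version\",\"id\":1}";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char resp[4096];
    int fd;

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }
    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
        return 1;
    }
    ssize_t n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);
    }
    close(fd);
    return 0;
}
```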
00:04:26.871 10:25:15 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:26.871 10:25:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.871 [2024-07-23 10:25:15.300460] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:04:26.871 [2024-07-23 10:25:15.300544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410160 ] 00:04:26.871 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.871 [2024-07-23 10:25:15.370062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.131 [2024-07-23 10:25:15.412505] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:27.131 [2024-07-23 10:25:15.412549] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3410160' to capture a snapshot of events at runtime. 00:04:27.131 [2024-07-23 10:25:15.412559] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:27.131 [2024-07-23 10:25:15.412584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:27.131 [2024-07-23 10:25:15.412592] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3410160 for offline analysis/debug. 00:04:27.131 [2024-07-23 10:25:15.412617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.131 10:25:15 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:27.131 10:25:15 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:27.131 10:25:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:27.131 10:25:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:27.131 10:25:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.131 10:25:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.131 10:25:15 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.131 10:25:15 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.131 10:25:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.390 ************************************ 00:04:27.390 START TEST rpc_integrity 00:04:27.390 ************************************ 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:27.390 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.390 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.390 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.390 10:25:15 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.390 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.390 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.390 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.390 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.390 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.390 { 00:04:27.390 "name": "Malloc0", 00:04:27.390 "aliases": [ 00:04:27.390 "fa32145e-0562-494c-a499-44f0dcf50e1b" 00:04:27.390 ], 00:04:27.390 "product_name": "Malloc disk", 00:04:27.390 "block_size": 512, 00:04:27.390 "num_blocks": 16384, 00:04:27.390 "uuid": "fa32145e-0562-494c-a499-44f0dcf50e1b", 00:04:27.390 "assigned_rate_limits": { 00:04:27.390 "rw_ios_per_sec": 0, 00:04:27.390 "rw_mbytes_per_sec": 0, 00:04:27.390 "r_mbytes_per_sec": 0, 00:04:27.390 "w_mbytes_per_sec": 0 00:04:27.390 }, 00:04:27.390 "claimed": false, 00:04:27.390 "zoned": false, 00:04:27.390 "supported_io_types": { 00:04:27.390 "read": true, 00:04:27.390 "write": true, 00:04:27.390 "unmap": true, 00:04:27.390 "write_zeroes": true, 00:04:27.390 "flush": true, 00:04:27.390 "reset": true, 00:04:27.390 "compare": false, 00:04:27.390 "compare_and_write": false, 00:04:27.390 "abort": true, 00:04:27.390 "nvme_admin": false, 00:04:27.390 "nvme_io": false 00:04:27.390 }, 00:04:27.390 "memory_domains": [ 00:04:27.390 { 00:04:27.390 "dma_device_id": "system", 00:04:27.390 "dma_device_type": 1 00:04:27.390 }, 00:04:27.390 { 00:04:27.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.390 "dma_device_type": 2 00:04:27.390 } 00:04:27.390 ], 00:04:27.390 "driver_specific": {} 00:04:27.390 } 00:04:27.390 ]' 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.391 [2024-07-23 10:25:15.779089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.391 [2024-07-23 10:25:15.779127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.391 [2024-07-23 10:25:15.779151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x588a370 00:04:27.391 [2024-07-23 10:25:15.779161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.391 [2024-07-23 10:25:15.780038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.391 [2024-07-23 10:25:15.780062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.391 Passthru0 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.391 { 00:04:27.391 "name": "Malloc0", 00:04:27.391 "aliases": [ 00:04:27.391 "fa32145e-0562-494c-a499-44f0dcf50e1b" 00:04:27.391 ], 00:04:27.391 "product_name": "Malloc disk", 00:04:27.391 "block_size": 512, 00:04:27.391 "num_blocks": 16384, 00:04:27.391 "uuid": "fa32145e-0562-494c-a499-44f0dcf50e1b", 00:04:27.391 "assigned_rate_limits": { 00:04:27.391 "rw_ios_per_sec": 0, 00:04:27.391 "rw_mbytes_per_sec": 0, 00:04:27.391 "r_mbytes_per_sec": 0, 00:04:27.391 "w_mbytes_per_sec": 0 00:04:27.391 }, 00:04:27.391 "claimed": true, 00:04:27.391 "claim_type": "exclusive_write", 00:04:27.391 "zoned": false, 00:04:27.391 "supported_io_types": { 00:04:27.391 "read": true, 00:04:27.391 "write": true, 00:04:27.391 "unmap": true, 00:04:27.391 "write_zeroes": true, 00:04:27.391 "flush": true, 00:04:27.391 "reset": true, 00:04:27.391 "compare": false, 00:04:27.391 "compare_and_write": false, 00:04:27.391 "abort": true, 00:04:27.391 "nvme_admin": false, 00:04:27.391 "nvme_io": false 00:04:27.391 }, 00:04:27.391 "memory_domains": [ 00:04:27.391 { 00:04:27.391 "dma_device_id": "system", 00:04:27.391 "dma_device_type": 1 00:04:27.391 }, 00:04:27.391 { 00:04:27.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.391 "dma_device_type": 2 00:04:27.391 } 00:04:27.391 ], 00:04:27.391 "driver_specific": {} 00:04:27.391 }, 00:04:27.391 { 00:04:27.391 "name": "Passthru0", 00:04:27.391 "aliases": [ 00:04:27.391 "df22c387-1ed9-54e1-957b-56e0055eaacf" 00:04:27.391 ], 00:04:27.391 "product_name": "passthru", 00:04:27.391 "block_size": 512, 00:04:27.391 "num_blocks": 16384, 00:04:27.391 "uuid": "df22c387-1ed9-54e1-957b-56e0055eaacf", 00:04:27.391 "assigned_rate_limits": { 00:04:27.391 "rw_ios_per_sec": 0, 00:04:27.391 "rw_mbytes_per_sec": 0, 00:04:27.391 "r_mbytes_per_sec": 0, 00:04:27.391 "w_mbytes_per_sec": 0 00:04:27.391 }, 00:04:27.391 "claimed": false, 00:04:27.391 "zoned": false, 00:04:27.391 "supported_io_types": { 00:04:27.391 "read": true, 00:04:27.391 "write": true, 00:04:27.391 "unmap": true, 00:04:27.391 "write_zeroes": true, 00:04:27.391 "flush": true, 00:04:27.391 "reset": true, 00:04:27.391 "compare": false, 00:04:27.391 "compare_and_write": false, 00:04:27.391 "abort": true, 00:04:27.391 "nvme_admin": false, 00:04:27.391 "nvme_io": false 00:04:27.391 }, 00:04:27.391 "memory_domains": [ 00:04:27.391 { 00:04:27.391 "dma_device_id": "system", 00:04:27.391 "dma_device_type": 1 00:04:27.391 }, 00:04:27.391 { 00:04:27.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.391 "dma_device_type": 2 00:04:27.391 } 00:04:27.391 ], 00:04:27.391 "driver_specific": { 00:04:27.391 "passthru": { 00:04:27.391 "name": "Passthru0", 00:04:27.391 "base_bdev_name": "Malloc0" 00:04:27.391 } 00:04:27.391 } 00:04:27.391 } 00:04:27.391 ]' 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.391 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.391 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.651 10:25:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.651 00:04:27.651 real 0m0.276s 00:04:27.651 user 0m0.174s 00:04:27.651 sys 0m0.044s 00:04:27.651 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.651 10:25:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.651 ************************************ 00:04:27.651 END TEST rpc_integrity 00:04:27.651 ************************************ 00:04:27.651 10:25:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.651 10:25:15 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.651 10:25:15 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.651 10:25:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.651 ************************************ 00:04:27.651 START TEST rpc_plugins 00:04:27.651 ************************************ 00:04:27.651 10:25:15 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:27.651 10:25:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.651 10:25:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.651 10:25:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.651 { 00:04:27.651 "name": "Malloc1", 00:04:27.651 "aliases": [ 00:04:27.651 "992d9969-3a72-48b1-8927-b8f3525c572c" 00:04:27.651 ], 00:04:27.651 "product_name": "Malloc disk", 00:04:27.651 "block_size": 4096, 00:04:27.651 "num_blocks": 256, 00:04:27.651 "uuid": "992d9969-3a72-48b1-8927-b8f3525c572c", 00:04:27.651 "assigned_rate_limits": { 00:04:27.651 "rw_ios_per_sec": 0, 00:04:27.651 "rw_mbytes_per_sec": 0, 00:04:27.651 "r_mbytes_per_sec": 0, 00:04:27.651 "w_mbytes_per_sec": 0 00:04:27.651 }, 00:04:27.651 "claimed": false, 00:04:27.651 "zoned": false, 00:04:27.651 "supported_io_types": { 00:04:27.651 "read": true, 00:04:27.651 "write": true, 00:04:27.651 "unmap": true, 00:04:27.651 "write_zeroes": true, 
00:04:27.651 "flush": true, 00:04:27.651 "reset": true, 00:04:27.651 "compare": false, 00:04:27.651 "compare_and_write": false, 00:04:27.651 "abort": true, 00:04:27.651 "nvme_admin": false, 00:04:27.651 "nvme_io": false 00:04:27.651 }, 00:04:27.651 "memory_domains": [ 00:04:27.651 { 00:04:27.651 "dma_device_id": "system", 00:04:27.651 "dma_device_type": 1 00:04:27.651 }, 00:04:27.651 { 00:04:27.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.651 "dma_device_type": 2 00:04:27.651 } 00:04:27.651 ], 00:04:27.651 "driver_specific": {} 00:04:27.651 } 00:04:27.651 ]' 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.651 10:25:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.651 00:04:27.651 real 0m0.132s 00:04:27.651 user 0m0.082s 00:04:27.651 sys 0m0.025s 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.651 10:25:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.651 ************************************ 00:04:27.651 END TEST rpc_plugins 00:04:27.651 ************************************ 00:04:27.911 10:25:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.911 10:25:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.911 10:25:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.911 10:25:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.911 ************************************ 00:04:27.911 START TEST rpc_trace_cmd_test 00:04:27.911 ************************************ 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.911 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3410160", 00:04:27.911 "tpoint_group_mask": "0x8", 00:04:27.911 "iscsi_conn": { 00:04:27.911 "mask": "0x2", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "scsi": { 00:04:27.911 "mask": "0x4", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "bdev": { 00:04:27.911 "mask": "0x8", 00:04:27.911 "tpoint_mask": 
"0xffffffffffffffff" 00:04:27.911 }, 00:04:27.911 "nvmf_rdma": { 00:04:27.911 "mask": "0x10", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "nvmf_tcp": { 00:04:27.911 "mask": "0x20", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "ftl": { 00:04:27.911 "mask": "0x40", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "blobfs": { 00:04:27.911 "mask": "0x80", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "dsa": { 00:04:27.911 "mask": "0x200", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "thread": { 00:04:27.911 "mask": "0x400", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "nvme_pcie": { 00:04:27.911 "mask": "0x800", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "iaa": { 00:04:27.911 "mask": "0x1000", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "nvme_tcp": { 00:04:27.911 "mask": "0x2000", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "bdev_nvme": { 00:04:27.911 "mask": "0x4000", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 }, 00:04:27.911 "sock": { 00:04:27.911 "mask": "0x8000", 00:04:27.911 "tpoint_mask": "0x0" 00:04:27.911 } 00:04:27.911 }' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.911 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.171 10:25:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.171 00:04:28.171 real 0m0.220s 00:04:28.171 user 0m0.181s 00:04:28.171 sys 0m0.031s 00:04:28.171 10:25:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.171 10:25:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.171 ************************************ 00:04:28.171 END TEST rpc_trace_cmd_test 00:04:28.171 ************************************ 00:04:28.171 10:25:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.171 10:25:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.171 10:25:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.171 10:25:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.171 10:25:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.171 10:25:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.171 ************************************ 00:04:28.171 START TEST rpc_daemon_integrity 00:04:28.171 ************************************ 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.171 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.171 { 00:04:28.171 "name": "Malloc2", 00:04:28.171 "aliases": [ 00:04:28.171 "a0c59c94-9822-4a15-9934-6ee48b862a5e" 00:04:28.171 ], 00:04:28.171 "product_name": "Malloc disk", 00:04:28.171 "block_size": 512, 00:04:28.171 "num_blocks": 16384, 00:04:28.171 "uuid": "a0c59c94-9822-4a15-9934-6ee48b862a5e", 00:04:28.171 "assigned_rate_limits": { 00:04:28.171 "rw_ios_per_sec": 0, 00:04:28.171 "rw_mbytes_per_sec": 0, 00:04:28.171 "r_mbytes_per_sec": 0, 00:04:28.171 "w_mbytes_per_sec": 0 00:04:28.171 }, 00:04:28.171 "claimed": false, 00:04:28.171 "zoned": false, 00:04:28.171 "supported_io_types": { 00:04:28.171 "read": true, 00:04:28.171 "write": true, 00:04:28.171 "unmap": true, 00:04:28.171 "write_zeroes": true, 00:04:28.171 "flush": true, 00:04:28.171 "reset": true, 00:04:28.171 "compare": false, 00:04:28.171 "compare_and_write": false, 00:04:28.171 "abort": true, 00:04:28.171 "nvme_admin": false, 00:04:28.171 "nvme_io": false 00:04:28.171 }, 00:04:28.171 "memory_domains": [ 00:04:28.171 { 00:04:28.172 "dma_device_id": "system", 00:04:28.172 "dma_device_type": 1 00:04:28.172 }, 00:04:28.172 { 00:04:28.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.172 "dma_device_type": 2 00:04:28.172 } 00:04:28.172 ], 00:04:28.172 "driver_specific": {} 00:04:28.172 } 00:04:28.172 ]' 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.172 [2024-07-23 10:25:16.629240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.172 [2024-07-23 10:25:16.629275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.172 [2024-07-23 10:25:16.629293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x588b510 00:04:28.172 [2024-07-23 10:25:16.629303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.172 [2024-07-23 10:25:16.630088] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.172 [2024-07-23 10:25:16.630110] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.172 Passthru0 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.172 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.172 { 00:04:28.172 "name": "Malloc2", 00:04:28.172 "aliases": [ 00:04:28.172 "a0c59c94-9822-4a15-9934-6ee48b862a5e" 00:04:28.172 ], 00:04:28.172 "product_name": "Malloc disk", 00:04:28.172 "block_size": 512, 00:04:28.172 "num_blocks": 16384, 00:04:28.172 "uuid": "a0c59c94-9822-4a15-9934-6ee48b862a5e", 00:04:28.172 "assigned_rate_limits": { 00:04:28.172 "rw_ios_per_sec": 0, 00:04:28.172 "rw_mbytes_per_sec": 0, 00:04:28.172 "r_mbytes_per_sec": 0, 00:04:28.172 "w_mbytes_per_sec": 0 00:04:28.172 }, 00:04:28.172 "claimed": true, 00:04:28.172 "claim_type": "exclusive_write", 00:04:28.172 "zoned": false, 00:04:28.172 "supported_io_types": { 00:04:28.172 "read": true, 00:04:28.172 "write": true, 00:04:28.172 "unmap": true, 00:04:28.172 "write_zeroes": true, 00:04:28.172 "flush": true, 00:04:28.172 "reset": true, 00:04:28.172 "compare": false, 00:04:28.172 "compare_and_write": false, 00:04:28.172 "abort": true, 00:04:28.172 "nvme_admin": false, 00:04:28.172 "nvme_io": false 00:04:28.172 }, 00:04:28.172 "memory_domains": [ 00:04:28.172 { 00:04:28.172 "dma_device_id": "system", 00:04:28.172 "dma_device_type": 1 00:04:28.172 }, 00:04:28.172 { 00:04:28.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.172 "dma_device_type": 2 00:04:28.172 } 00:04:28.172 ], 00:04:28.172 "driver_specific": {} 00:04:28.172 }, 00:04:28.172 { 00:04:28.172 "name": "Passthru0", 00:04:28.172 "aliases": [ 00:04:28.172 "65debff7-79e5-5d76-90c3-540026a838aa" 00:04:28.172 ], 00:04:28.172 "product_name": "passthru", 00:04:28.172 "block_size": 512, 00:04:28.172 "num_blocks": 16384, 00:04:28.172 "uuid": "65debff7-79e5-5d76-90c3-540026a838aa", 00:04:28.172 "assigned_rate_limits": { 00:04:28.172 "rw_ios_per_sec": 0, 00:04:28.172 "rw_mbytes_per_sec": 0, 00:04:28.172 "r_mbytes_per_sec": 0, 00:04:28.172 "w_mbytes_per_sec": 0 00:04:28.172 }, 00:04:28.172 "claimed": false, 00:04:28.172 "zoned": false, 00:04:28.172 "supported_io_types": { 00:04:28.172 "read": true, 00:04:28.172 "write": true, 00:04:28.172 "unmap": true, 00:04:28.172 "write_zeroes": true, 00:04:28.172 "flush": true, 00:04:28.172 "reset": true, 00:04:28.172 "compare": false, 00:04:28.172 "compare_and_write": false, 00:04:28.172 "abort": true, 00:04:28.172 "nvme_admin": false, 00:04:28.172 "nvme_io": false 00:04:28.172 }, 00:04:28.172 "memory_domains": [ 00:04:28.172 { 00:04:28.172 "dma_device_id": "system", 00:04:28.172 "dma_device_type": 1 00:04:28.172 }, 00:04:28.172 { 00:04:28.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.172 "dma_device_type": 2 00:04:28.172 } 00:04:28.172 ], 00:04:28.172 "driver_specific": { 00:04:28.172 "passthru": { 00:04:28.172 "name": "Passthru0", 00:04:28.172 "base_bdev_name": "Malloc2" 00:04:28.172 } 00:04:28.172 } 00:04:28.172 } 00:04:28.172 ]' 00:04:28.172 10:25:16 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.432 00:04:28.432 real 0m0.280s 00:04:28.432 user 0m0.180s 00:04:28.432 sys 0m0.045s 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.432 10:25:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.432 ************************************ 00:04:28.432 END TEST rpc_daemon_integrity 00:04:28.432 ************************************ 00:04:28.432 10:25:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.432 10:25:16 rpc -- rpc/rpc.sh@84 -- # killprocess 3410160 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@946 -- # '[' -z 3410160 ']' 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@950 -- # kill -0 3410160 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@951 -- # uname 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410160 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410160' 00:04:28.432 killing process with pid 3410160 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@965 -- # kill 3410160 00:04:28.432 10:25:16 rpc -- common/autotest_common.sh@970 -- # wait 3410160 00:04:28.692 00:04:28.692 real 0m2.018s 00:04:28.692 user 0m2.558s 00:04:28.692 sys 0m0.758s 00:04:28.692 10:25:17 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.692 10:25:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.692 ************************************ 00:04:28.692 END TEST rpc 00:04:28.692 ************************************ 00:04:28.951 10:25:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.951 10:25:17 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.951 10:25:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.951 10:25:17 -- common/autotest_common.sh@10 -- # set +x 00:04:28.951 ************************************ 00:04:28.951 START TEST skip_rpc 00:04:28.951 ************************************ 00:04:28.951 10:25:17 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.951 * Looking for test storage... 00:04:28.951 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:28.952 10:25:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:28.952 10:25:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:28.952 10:25:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:28.952 10:25:17 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.952 10:25:17 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.952 10:25:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.952 ************************************ 00:04:28.952 START TEST skip_rpc 00:04:28.952 ************************************ 00:04:28.952 10:25:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:28.952 10:25:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:28.952 10:25:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3410613 00:04:28.952 10:25:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.952 10:25:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:28.952 [2024-07-23 10:25:17.400611] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
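Note: this target was started with --no-rpc-server, so the NOT rpc_cmd spdk_get_version check further below passes precisely because nothing listens on the RPC socket. A sketch of that expectation, assuming the default socket path:

```c
/* With spdk_tgt under --no-rpc-server there is no listener, so the
 * connection attempt itself fails; the harness's NOT check below
 * asserts exactly this. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");   /* expected: no RPC server was started */
        close(fd);
        return 0;            /* failure to connect is the pass condition */
    }
    close(fd);
    fprintf(stderr, "unexpected: RPC socket is listening\n");
    return 1;
}
```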
00:04:28.952 [2024-07-23 10:25:17.400665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410613 ] 00:04:28.952 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.211 [2024-07-23 10:25:17.467134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.211 [2024-07-23 10:25:17.511104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3410613 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3410613 ']' 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3410613 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410613 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410613' 00:04:34.485 killing process with pid 3410613 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3410613 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3410613 00:04:34.485 00:04:34.485 real 0m5.382s 00:04:34.485 user 0m5.119s 00:04:34.485 sys 0m0.288s 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.485 10:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.485 ************************************ 00:04:34.485 END TEST skip_rpc 
00:04:34.485 ************************************ 00:04:34.485 10:25:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.485 10:25:22 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.485 10:25:22 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.485 10:25:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.485 ************************************ 00:04:34.485 START TEST skip_rpc_with_json 00:04:34.485 ************************************ 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3411334 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3411334 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3411334 ']' 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:34.485 10:25:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.486 10:25:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:34.486 10:25:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.486 [2024-07-23 10:25:22.863254] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
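Note: the nvmf_get_transports error response below carries "code": -19, which is simply a negated Linux errno: 19 is ENODEV, hence the "No such device" message — the transport does not exist until nvmf_create_transport runs. A two-line check of that mapping:

```c
/* The "code": -19 in the JSON-RPC error below is a negated errno. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("-%d => %s\n", ENODEV, strerror(ENODEV)); /* -19 => No such device */
    return 0;
}
```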
00:04:34.486 [2024-07-23 10:25:22.863315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411334 ] 00:04:34.486 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.486 [2024-07-23 10:25:22.926765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.486 [2024-07-23 10:25:22.972744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.746 [2024-07-23 10:25:23.185889] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.746 request: 00:04:34.746 { 00:04:34.746 "trtype": "tcp", 00:04:34.746 "method": "nvmf_get_transports", 00:04:34.746 "req_id": 1 00:04:34.746 } 00:04:34.746 Got JSON-RPC error response 00:04:34.746 response: 00:04:34.746 { 00:04:34.746 "code": -19, 00:04:34.746 "message": "No such device" 00:04:34.746 } 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.746 [2024-07-23 10:25:23.197986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.746 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.005 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.005 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:35.005 { 00:04:35.005 "subsystems": [ 00:04:35.005 { 00:04:35.005 "subsystem": "scheduler", 00:04:35.005 "config": [ 00:04:35.005 { 00:04:35.005 "method": "framework_set_scheduler", 00:04:35.005 "params": { 00:04:35.005 "name": "static" 00:04:35.005 } 00:04:35.005 } 00:04:35.005 ] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "vmd", 00:04:35.005 "config": [] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "sock", 00:04:35.005 "config": [ 00:04:35.005 { 00:04:35.005 "method": "sock_set_default_impl", 00:04:35.005 "params": { 00:04:35.005 "impl_name": "posix" 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "sock_impl_set_options", 00:04:35.005 "params": { 00:04:35.005 "impl_name": "ssl", 00:04:35.005 "recv_buf_size": 4096, 00:04:35.005 "send_buf_size": 4096, 00:04:35.005 "enable_recv_pipe": true, 00:04:35.005 "enable_quickack": 
false, 00:04:35.005 "enable_placement_id": 0, 00:04:35.005 "enable_zerocopy_send_server": true, 00:04:35.005 "enable_zerocopy_send_client": false, 00:04:35.005 "zerocopy_threshold": 0, 00:04:35.005 "tls_version": 0, 00:04:35.005 "enable_ktls": false 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "sock_impl_set_options", 00:04:35.005 "params": { 00:04:35.005 "impl_name": "posix", 00:04:35.005 "recv_buf_size": 2097152, 00:04:35.005 "send_buf_size": 2097152, 00:04:35.005 "enable_recv_pipe": true, 00:04:35.005 "enable_quickack": false, 00:04:35.005 "enable_placement_id": 0, 00:04:35.005 "enable_zerocopy_send_server": true, 00:04:35.005 "enable_zerocopy_send_client": false, 00:04:35.005 "zerocopy_threshold": 0, 00:04:35.005 "tls_version": 0, 00:04:35.005 "enable_ktls": false 00:04:35.005 } 00:04:35.005 } 00:04:35.005 ] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "iobuf", 00:04:35.005 "config": [ 00:04:35.005 { 00:04:35.005 "method": "iobuf_set_options", 00:04:35.005 "params": { 00:04:35.005 "small_pool_count": 8192, 00:04:35.005 "large_pool_count": 1024, 00:04:35.005 "small_bufsize": 8192, 00:04:35.005 "large_bufsize": 135168 00:04:35.005 } 00:04:35.005 } 00:04:35.005 ] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "keyring", 00:04:35.005 "config": [] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "vfio_user_target", 00:04:35.005 "config": null 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "accel", 00:04:35.005 "config": [ 00:04:35.005 { 00:04:35.005 "method": "accel_set_options", 00:04:35.005 "params": { 00:04:35.005 "small_cache_size": 128, 00:04:35.005 "large_cache_size": 16, 00:04:35.005 "task_count": 2048, 00:04:35.005 "sequence_count": 2048, 00:04:35.005 "buf_count": 2048 00:04:35.005 } 00:04:35.005 } 00:04:35.005 ] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "bdev", 00:04:35.005 "config": [ 00:04:35.005 { 00:04:35.005 "method": "bdev_set_options", 00:04:35.005 "params": { 00:04:35.005 "bdev_io_pool_size": 65535, 00:04:35.005 "bdev_io_cache_size": 256, 00:04:35.005 "bdev_auto_examine": true, 00:04:35.005 "iobuf_small_cache_size": 128, 00:04:35.005 "iobuf_large_cache_size": 16 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "bdev_raid_set_options", 00:04:35.005 "params": { 00:04:35.005 "process_window_size_kb": 1024 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "bdev_nvme_set_options", 00:04:35.005 "params": { 00:04:35.005 "action_on_timeout": "none", 00:04:35.005 "timeout_us": 0, 00:04:35.005 "timeout_admin_us": 0, 00:04:35.005 "keep_alive_timeout_ms": 10000, 00:04:35.005 "arbitration_burst": 0, 00:04:35.005 "low_priority_weight": 0, 00:04:35.005 "medium_priority_weight": 0, 00:04:35.005 "high_priority_weight": 0, 00:04:35.005 "nvme_adminq_poll_period_us": 10000, 00:04:35.005 "nvme_ioq_poll_period_us": 0, 00:04:35.005 "io_queue_requests": 0, 00:04:35.005 "delay_cmd_submit": true, 00:04:35.005 "transport_retry_count": 4, 00:04:35.005 "bdev_retry_count": 3, 00:04:35.005 "transport_ack_timeout": 0, 00:04:35.005 "ctrlr_loss_timeout_sec": 0, 00:04:35.005 "reconnect_delay_sec": 0, 00:04:35.005 "fast_io_fail_timeout_sec": 0, 00:04:35.005 "disable_auto_failback": false, 00:04:35.005 "generate_uuids": false, 00:04:35.005 "transport_tos": 0, 00:04:35.005 "nvme_error_stat": false, 00:04:35.005 "rdma_srq_size": 0, 00:04:35.005 "io_path_stat": false, 00:04:35.005 "allow_accel_sequence": false, 00:04:35.005 "rdma_max_cq_size": 0, 00:04:35.005 "rdma_cm_event_timeout_ms": 0, 
00:04:35.005 "dhchap_digests": [ 00:04:35.005 "sha256", 00:04:35.005 "sha384", 00:04:35.005 "sha512" 00:04:35.005 ], 00:04:35.005 "dhchap_dhgroups": [ 00:04:35.005 "null", 00:04:35.005 "ffdhe2048", 00:04:35.005 "ffdhe3072", 00:04:35.005 "ffdhe4096", 00:04:35.005 "ffdhe6144", 00:04:35.005 "ffdhe8192" 00:04:35.005 ] 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "bdev_nvme_set_hotplug", 00:04:35.005 "params": { 00:04:35.005 "period_us": 100000, 00:04:35.005 "enable": false 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "bdev_iscsi_set_options", 00:04:35.005 "params": { 00:04:35.005 "timeout_sec": 30 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "bdev_wait_for_examine" 00:04:35.005 } 00:04:35.005 ] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "nvmf", 00:04:35.005 "config": [ 00:04:35.005 { 00:04:35.005 "method": "nvmf_set_config", 00:04:35.005 "params": { 00:04:35.005 "discovery_filter": "match_any", 00:04:35.005 "admin_cmd_passthru": { 00:04:35.005 "identify_ctrlr": false 00:04:35.005 } 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "nvmf_set_max_subsystems", 00:04:35.005 "params": { 00:04:35.005 "max_subsystems": 1024 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "nvmf_set_crdt", 00:04:35.005 "params": { 00:04:35.005 "crdt1": 0, 00:04:35.005 "crdt2": 0, 00:04:35.005 "crdt3": 0 00:04:35.005 } 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "method": "nvmf_create_transport", 00:04:35.005 "params": { 00:04:35.005 "trtype": "TCP", 00:04:35.005 "max_queue_depth": 128, 00:04:35.005 "max_io_qpairs_per_ctrlr": 127, 00:04:35.005 "in_capsule_data_size": 4096, 00:04:35.005 "max_io_size": 131072, 00:04:35.005 "io_unit_size": 131072, 00:04:35.005 "max_aq_depth": 128, 00:04:35.005 "num_shared_buffers": 511, 00:04:35.005 "buf_cache_size": 4294967295, 00:04:35.005 "dif_insert_or_strip": false, 00:04:35.005 "zcopy": false, 00:04:35.005 "c2h_success": true, 00:04:35.005 "sock_priority": 0, 00:04:35.005 "abort_timeout_sec": 1, 00:04:35.005 "ack_timeout": 0, 00:04:35.005 "data_wr_pool_size": 0 00:04:35.005 } 00:04:35.005 } 00:04:35.005 ] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "nbd", 00:04:35.005 "config": [] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "ublk", 00:04:35.005 "config": [] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "vhost_blk", 00:04:35.005 "config": [] 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "scsi", 00:04:35.005 "config": null 00:04:35.005 }, 00:04:35.005 { 00:04:35.005 "subsystem": "iscsi", 00:04:35.005 "config": [ 00:04:35.005 { 00:04:35.005 "method": "iscsi_set_options", 00:04:35.005 "params": { 00:04:35.005 "node_base": "iqn.2016-06.io.spdk", 00:04:35.005 "max_sessions": 128, 00:04:35.005 "max_connections_per_session": 2, 00:04:35.005 "max_queue_depth": 64, 00:04:35.005 "default_time2wait": 2, 00:04:35.005 "default_time2retain": 20, 00:04:35.005 "first_burst_length": 8192, 00:04:35.005 "immediate_data": true, 00:04:35.006 "allow_duplicated_isid": false, 00:04:35.006 "error_recovery_level": 0, 00:04:35.006 "nop_timeout": 60, 00:04:35.006 "nop_in_interval": 30, 00:04:35.006 "disable_chap": false, 00:04:35.006 "require_chap": false, 00:04:35.006 "mutual_chap": false, 00:04:35.006 "chap_group": 0, 00:04:35.006 "max_large_datain_per_connection": 64, 00:04:35.006 "max_r2t_per_connection": 4, 00:04:35.006 "pdu_pool_size": 36864, 00:04:35.006 "immediate_data_pool_size": 16384, 00:04:35.006 "data_out_pool_size": 2048 
00:04:35.006 } 00:04:35.006 } 00:04:35.006 ] 00:04:35.006 }, 00:04:35.006 { 00:04:35.006 "subsystem": "vhost_scsi", 00:04:35.006 "config": [] 00:04:35.006 } 00:04:35.006 ] 00:04:35.006 } 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3411334 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3411334 ']' 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3411334 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3411334 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3411334' 00:04:35.006 killing process with pid 3411334 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3411334 00:04:35.006 10:25:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3411334 00:04:35.265 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:35.265 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3411466 00:04:35.265 10:25:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.540 10:25:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3411466 00:04:40.540 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3411466 ']' 00:04:40.540 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3411466 00:04:40.540 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:40.541 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:40.541 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3411466 00:04:40.541 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:40.541 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:40.541 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3411466' 00:04:40.541 killing process with pid 3411466 00:04:40.541 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3411466 00:04:40.541 10:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3411466 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:40.801 00:04:40.801 real 
0m6.276s 00:04:40.801 user 0m5.898s 00:04:40.801 sys 0m0.648s 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.801 ************************************ 00:04:40.801 END TEST skip_rpc_with_json 00:04:40.801 ************************************ 00:04:40.801 10:25:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.801 10:25:29 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.801 10:25:29 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.801 10:25:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.801 ************************************ 00:04:40.801 START TEST skip_rpc_with_delay 00:04:40.801 ************************************ 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.801 [2024-07-23 10:25:29.231151] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
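The skip_rpc_with_json run that finished just above is a save/restore round-trip: a live target is configured over RPC (nvmf_create_transport -t tcp), save_config captures the JSON dump shown above into test/rpc/config.json, and a second target then boots non-interactively from that file with --no-rpc-server --json, verified by grepping its log for 'TCP Transport Init'. A condensed sketch of the same round-trip, with the absolute /var/jenkins/... prefixes shortened to the repo root and the harness's waitforlisten replaced by sleep:

build/bin/spdk_tgt -m 0x1 & spdk_pid=$!
sleep 2
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > test/rpc/config.json
kill "$spdk_pid"; wait "$spdk_pid" 2> /dev/null
# Replay the captured configuration with no RPC server at all.
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
    > test/rpc/log.txt 2>&1 & json_pid=$!
sleep 5
grep -q 'TCP Transport Init' test/rpc/log.txt && echo 'config replayed OK'
kill "$json_pid"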
00:04:40.801 [2024-07-23 10:25:29.231293] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:40.801 00:04:40.801 real 0m0.044s 00:04:40.801 user 0m0.019s 00:04:40.801 sys 0m0.025s 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.801 10:25:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.801 ************************************ 00:04:40.801 END TEST skip_rpc_with_delay 00:04:40.801 ************************************ 00:04:40.801 10:25:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.801 10:25:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.801 10:25:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.801 10:25:29 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.801 10:25:29 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.801 10:25:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.061 ************************************ 00:04:41.061 START TEST exit_on_failed_rpc_init 00:04:41.061 ************************************ 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3412251 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3412251 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3412251 ']' 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:41.061 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.061 [2024-07-23 10:25:29.343048] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
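skip_rpc_with_delay, which completed above, is purely a negative argument-parsing check: --wait-for-rpc is meaningless when --no-rpc-server suppresses the RPC server, so the app must refuse to start, which is the "Cannot use '--wait-for-rpc' if no RPC server is going to be started." ERROR above. The assertion reduces to checking the exit status of the exact command line from the trace:

if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: contradictory flags were accepted" >&2
else
    echo "OK: startup rejected as expected"
fi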
00:04:41.061 [2024-07-23 10:25:29.343110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412251 ] 00:04:41.061 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.061 [2024-07-23 10:25:29.406453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.061 [2024-07-23 10:25:29.452396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.321 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.322 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.322 [2024-07-23 10:25:29.674144] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:04:41.322 [2024-07-23 10:25:29.674219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412262 ] 00:04:41.322 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.322 [2024-07-23 10:25:29.743630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.322 [2024-07-23 10:25:29.786741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.322 [2024-07-23 10:25:29.786838] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:04:41.322 [2024-07-23 10:25:29.786851] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.322 [2024-07-23 10:25:29.786859] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3412251 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3412251 ']' 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3412251 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3412251 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3412251' 00:04:41.581 killing process with pid 3412251 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3412251 00:04:41.581 10:25:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3412251 00:04:41.841 00:04:41.841 real 0m0.893s 00:04:41.841 user 0m0.881s 00:04:41.841 sys 0m0.421s 00:04:41.841 10:25:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.841 10:25:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.841 ************************************ 00:04:41.841 END TEST exit_on_failed_rpc_init 00:04:41.841 ************************************ 00:04:41.841 10:25:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:41.841 00:04:41.841 real 0m13.010s 00:04:41.841 user 0m12.091s 00:04:41.841 sys 0m1.656s 00:04:41.841 10:25:30 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.841 10:25:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.841 ************************************ 00:04:41.841 END TEST skip_rpc 00:04:41.841 ************************************ 00:04:41.841 10:25:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:41.841 10:25:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.841 10:25:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 
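The exit_on_failed_rpc_init test that closed above exercises RPC socket contention: a first target owns /var/tmp/spdk.sock, and a second one (different core mask, -m 0x2) must fail rpc.c's listen step with "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." and exit non-zero (es=234 before the NOT inversion). A sketch of the collision, assuming both instances use the default socket path:

build/bin/spdk_tgt -m 0x1 & first_pid=$!
sleep 2                       # first instance now owns /var/tmp/spdk.sock
if build/bin/spdk_tgt -m 0x2; then
    echo "FAIL: second instance should not have started" >&2
else
    echo "OK: second instance refused the busy RPC socket"
fi
kill "$first_pid"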
00:04:41.841 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:04:42.100 ************************************ 00:04:42.100 START TEST rpc_client 00:04:42.100 ************************************ 00:04:42.100 10:25:30 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:42.100 * Looking for test storage... 00:04:42.101 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:42.101 10:25:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:42.101 OK 00:04:42.101 10:25:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.101 00:04:42.101 real 0m0.093s 00:04:42.101 user 0m0.029s 00:04:42.101 sys 0m0.069s 00:04:42.101 10:25:30 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.101 10:25:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.101 ************************************ 00:04:42.101 END TEST rpc_client 00:04:42.101 ************************************ 00:04:42.101 10:25:30 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.101 10:25:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.101 10:25:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.101 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:04:42.101 ************************************ 00:04:42.101 START TEST json_config 00:04:42.101 ************************************ 00:04:42.101 10:25:30 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.101 10:25:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.101 10:25:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.361 10:25:30 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:42.361 10:25:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.361 10:25:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.361 10:25:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.361 10:25:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.361 10:25:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.361 10:25:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.361 10:25:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.361 10:25:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@47 -- # : 0 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:42.361 10:25:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:42.361 10:25:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:42.361 10:25:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.361 10:25:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.361 10:25:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.361 10:25:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.361 10:25:30 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:42.361 WARNING: No tests are enabled so not running JSON configuration tests 00:04:42.361 10:25:30 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:42.361 00:04:42.361 real 0m0.110s 00:04:42.361 user 0m0.057s 00:04:42.361 sys 0m0.055s 00:04:42.361 10:25:30 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.361 10:25:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.361 ************************************ 00:04:42.361 END TEST json_config 00:04:42.361 ************************************ 00:04:42.361 10:25:30 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.361 10:25:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.361 10:25:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.361 10:25:30 -- common/autotest_common.sh@10 -- # set +x 00:04:42.361 ************************************ 00:04:42.362 START TEST json_config_extra_key 00:04:42.362 ************************************ 00:04:42.362 10:25:30 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
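json_config above is a no-op in this job: after sourcing nvmf/common.sh and the PATH exports, json_config.sh sums the feature flags and bails out before doing any work, which is why the only output is the WARNING and exit 0. The guard it evaluates is plain arithmetic over the test-enable variables, reproduced here from the trace (the flags all default to 0 in this short-fuzz job):

if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST \
      + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
    echo 'WARNING: No tests are enabled so not running JSON configuration tests'
    exit 0
fi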
00:04:42.362 10:25:30 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.362 10:25:30 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.362 10:25:30 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.362 10:25:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.362 10:25:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.362 10:25:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.362 10:25:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:42.362 10:25:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:42.362 10:25:30 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:42.362 10:25:30 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:42.362 INFO: launching applications... 00:04:42.362 10:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3412582 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.362 Waiting for target to run... 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3412582 /var/tmp/spdk_tgt.sock 00:04:42.362 10:25:30 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3412582 ']' 00:04:42.362 10:25:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.362 10:25:30 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.362 10:25:30 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:42.362 10:25:30 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.362 10:25:30 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:42.362 10:25:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.362 [2024-07-23 10:25:30.835913] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
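json_config_extra_key boots the target directly from a canned config (-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json .../extra_key.json) and then validates clean shutdown: json_config/common.sh sends SIGINT and polls the pid for up to thirty half-second intervals, as the kill -0 loop traced just below this point shows. The shutdown half in isolation looks roughly like the following, where app_pid stands in for the harness's app_pid["target"] entry:

kill -SIGINT "$app_pid"                  # request a clean shutdown
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2> /dev/null || { echo 'SPDK target shutdown done'; break; }
    sleep 0.5
done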
00:04:42.362 [2024-07-23 10:25:30.836012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412582 ] 00:04:42.621 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.880 [2024-07-23 10:25:31.154889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.880 [2024-07-23 10:25:31.178377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.448 10:25:31 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:43.448 10:25:31 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.448 00:04:43.448 10:25:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:43.448 INFO: shutting down applications... 00:04:43.448 10:25:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3412582 ]] 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3412582 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3412582 00:04:43.448 10:25:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.707 10:25:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.707 10:25:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.707 10:25:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3412582 00:04:43.707 10:25:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.707 10:25:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.707 10:25:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.707 10:25:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.707 SPDK target shutdown done 00:04:43.707 10:25:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.707 Success 00:04:43.707 00:04:43.707 real 0m1.464s 00:04:43.707 user 0m1.200s 00:04:43.707 sys 0m0.446s 00:04:43.707 10:25:32 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.707 10:25:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.707 ************************************ 00:04:43.707 END TEST json_config_extra_key 00:04:43.707 ************************************ 00:04:43.967 10:25:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.967 10:25:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.967 10:25:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.967 10:25:32 -- common/autotest_common.sh@10 -- # set +x 00:04:43.967 ************************************ 
00:04:43.967 START TEST alias_rpc 00:04:43.967 ************************************ 00:04:43.967 10:25:32 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.967 * Looking for test storage... 00:04:43.967 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:43.967 10:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.967 10:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3412811 00:04:43.967 10:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3412811 00:04:43.967 10:25:32 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3412811 ']' 00:04:43.967 10:25:32 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.967 10:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.967 10:25:32 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:43.967 10:25:32 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.967 10:25:32 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:43.967 10:25:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.967 [2024-07-23 10:25:32.385985] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:04:43.967 [2024-07-23 10:25:32.386080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412811 ] 00:04:43.967 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.967 [2024-07-23 10:25:32.458523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.226 [2024-07-23 10:25:32.500471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.226 10:25:32 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:44.226 10:25:32 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:44.226 10:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:44.493 10:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3412811 00:04:44.493 10:25:32 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3412811 ']' 00:04:44.493 10:25:32 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3412811 00:04:44.493 10:25:32 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:44.493 10:25:32 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:44.493 10:25:32 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3412811 00:04:44.494 10:25:32 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:44.494 10:25:32 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:44.494 10:25:32 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3412811' 00:04:44.494 killing process with pid 3412811 00:04:44.494 10:25:32 alias_rpc -- common/autotest_common.sh@965 -- # kill 3412811 00:04:44.494 10:25:32 alias_rpc -- common/autotest_common.sh@970 -- # wait 
3412811 00:04:44.753 00:04:44.753 real 0m0.993s 00:04:44.753 user 0m0.935s 00:04:44.753 sys 0m0.412s 00:04:44.753 10:25:33 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.753 10:25:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.753 ************************************ 00:04:44.753 END TEST alias_rpc 00:04:44.753 ************************************ 00:04:45.010 10:25:33 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:45.010 10:25:33 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:45.010 10:25:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.010 10:25:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.010 10:25:33 -- common/autotest_common.sh@10 -- # set +x 00:04:45.010 ************************************ 00:04:45.010 START TEST spdkcli_tcp 00:04:45.010 ************************************ 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:45.010 * Looking for test storage... 00:04:45.010 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3413049 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3413049 00:04:45.010 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3413049 ']' 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:45.010 10:25:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.010 [2024-07-23 10:25:33.450271] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
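spdkcli_tcp, starting above, checks the RPC client's TCP transport by bridging 127.0.0.1:9998 to the target's UNIX socket with socat (set up just below), then issuing rpc_get_methods over TCP with retries (-r 100) and a 2-second timeout (-t 2); the long method list that follows is that call's output. The bridge plus call, as traced, with paths shortened to the repo root:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock & socat_pid=$!
# socat forwards a single connection, which is all the one-shot RPC needs.
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid" 2> /dev/null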
00:04:45.010 [2024-07-23 10:25:33.450340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413049 ] 00:04:45.010 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.268 [2024-07-23 10:25:33.519986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.268 [2024-07-23 10:25:33.562153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.268 [2024-07-23 10:25:33.562156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.268 10:25:33 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.268 10:25:33 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:45.268 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3413060 00:04:45.268 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.268 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:45.527 [ 00:04:45.527 "spdk_get_version", 00:04:45.527 "rpc_get_methods", 00:04:45.527 "trace_get_info", 00:04:45.527 "trace_get_tpoint_group_mask", 00:04:45.527 "trace_disable_tpoint_group", 00:04:45.527 "trace_enable_tpoint_group", 00:04:45.527 "trace_clear_tpoint_mask", 00:04:45.527 "trace_set_tpoint_mask", 00:04:45.527 "vfu_tgt_set_base_path", 00:04:45.527 "framework_get_pci_devices", 00:04:45.527 "framework_get_config", 00:04:45.527 "framework_get_subsystems", 00:04:45.527 "keyring_get_keys", 00:04:45.527 "iobuf_get_stats", 00:04:45.527 "iobuf_set_options", 00:04:45.527 "sock_get_default_impl", 00:04:45.527 "sock_set_default_impl", 00:04:45.527 "sock_impl_set_options", 00:04:45.527 "sock_impl_get_options", 00:04:45.527 "vmd_rescan", 00:04:45.527 "vmd_remove_device", 00:04:45.527 "vmd_enable", 00:04:45.527 "accel_get_stats", 00:04:45.527 "accel_set_options", 00:04:45.527 "accel_set_driver", 00:04:45.527 "accel_crypto_key_destroy", 00:04:45.527 "accel_crypto_keys_get", 00:04:45.527 "accel_crypto_key_create", 00:04:45.527 "accel_assign_opc", 00:04:45.527 "accel_get_module_info", 00:04:45.527 "accel_get_opc_assignments", 00:04:45.527 "notify_get_notifications", 00:04:45.527 "notify_get_types", 00:04:45.527 "bdev_get_histogram", 00:04:45.528 "bdev_enable_histogram", 00:04:45.528 "bdev_set_qos_limit", 00:04:45.528 "bdev_set_qd_sampling_period", 00:04:45.528 "bdev_get_bdevs", 00:04:45.528 "bdev_reset_iostat", 00:04:45.528 "bdev_get_iostat", 00:04:45.528 "bdev_examine", 00:04:45.528 "bdev_wait_for_examine", 00:04:45.528 "bdev_set_options", 00:04:45.528 "scsi_get_devices", 00:04:45.528 "thread_set_cpumask", 00:04:45.528 "framework_get_scheduler", 00:04:45.528 "framework_set_scheduler", 00:04:45.528 "framework_get_reactors", 00:04:45.528 "thread_get_io_channels", 00:04:45.528 "thread_get_pollers", 00:04:45.528 "thread_get_stats", 00:04:45.528 "framework_monitor_context_switch", 00:04:45.528 "spdk_kill_instance", 00:04:45.528 "log_enable_timestamps", 00:04:45.528 "log_get_flags", 00:04:45.528 "log_clear_flag", 00:04:45.528 "log_set_flag", 00:04:45.528 "log_get_level", 00:04:45.528 "log_set_level", 00:04:45.528 "log_get_print_level", 00:04:45.528 "log_set_print_level", 00:04:45.528 "framework_enable_cpumask_locks", 00:04:45.528 "framework_disable_cpumask_locks", 00:04:45.528 "framework_wait_init", 00:04:45.528 
"framework_start_init", 00:04:45.528 "virtio_blk_create_transport", 00:04:45.528 "virtio_blk_get_transports", 00:04:45.528 "vhost_controller_set_coalescing", 00:04:45.528 "vhost_get_controllers", 00:04:45.528 "vhost_delete_controller", 00:04:45.528 "vhost_create_blk_controller", 00:04:45.528 "vhost_scsi_controller_remove_target", 00:04:45.528 "vhost_scsi_controller_add_target", 00:04:45.528 "vhost_start_scsi_controller", 00:04:45.528 "vhost_create_scsi_controller", 00:04:45.528 "ublk_recover_disk", 00:04:45.528 "ublk_get_disks", 00:04:45.528 "ublk_stop_disk", 00:04:45.528 "ublk_start_disk", 00:04:45.528 "ublk_destroy_target", 00:04:45.528 "ublk_create_target", 00:04:45.528 "nbd_get_disks", 00:04:45.528 "nbd_stop_disk", 00:04:45.528 "nbd_start_disk", 00:04:45.528 "env_dpdk_get_mem_stats", 00:04:45.528 "nvmf_stop_mdns_prr", 00:04:45.528 "nvmf_publish_mdns_prr", 00:04:45.528 "nvmf_subsystem_get_listeners", 00:04:45.528 "nvmf_subsystem_get_qpairs", 00:04:45.528 "nvmf_subsystem_get_controllers", 00:04:45.528 "nvmf_get_stats", 00:04:45.528 "nvmf_get_transports", 00:04:45.528 "nvmf_create_transport", 00:04:45.528 "nvmf_get_targets", 00:04:45.528 "nvmf_delete_target", 00:04:45.528 "nvmf_create_target", 00:04:45.528 "nvmf_subsystem_allow_any_host", 00:04:45.528 "nvmf_subsystem_remove_host", 00:04:45.528 "nvmf_subsystem_add_host", 00:04:45.528 "nvmf_ns_remove_host", 00:04:45.528 "nvmf_ns_add_host", 00:04:45.528 "nvmf_subsystem_remove_ns", 00:04:45.528 "nvmf_subsystem_add_ns", 00:04:45.528 "nvmf_subsystem_listener_set_ana_state", 00:04:45.528 "nvmf_discovery_get_referrals", 00:04:45.528 "nvmf_discovery_remove_referral", 00:04:45.528 "nvmf_discovery_add_referral", 00:04:45.528 "nvmf_subsystem_remove_listener", 00:04:45.528 "nvmf_subsystem_add_listener", 00:04:45.528 "nvmf_delete_subsystem", 00:04:45.528 "nvmf_create_subsystem", 00:04:45.528 "nvmf_get_subsystems", 00:04:45.528 "nvmf_set_crdt", 00:04:45.528 "nvmf_set_config", 00:04:45.528 "nvmf_set_max_subsystems", 00:04:45.528 "iscsi_get_histogram", 00:04:45.528 "iscsi_enable_histogram", 00:04:45.528 "iscsi_set_options", 00:04:45.528 "iscsi_get_auth_groups", 00:04:45.528 "iscsi_auth_group_remove_secret", 00:04:45.528 "iscsi_auth_group_add_secret", 00:04:45.528 "iscsi_delete_auth_group", 00:04:45.528 "iscsi_create_auth_group", 00:04:45.528 "iscsi_set_discovery_auth", 00:04:45.528 "iscsi_get_options", 00:04:45.528 "iscsi_target_node_request_logout", 00:04:45.528 "iscsi_target_node_set_redirect", 00:04:45.528 "iscsi_target_node_set_auth", 00:04:45.528 "iscsi_target_node_add_lun", 00:04:45.528 "iscsi_get_stats", 00:04:45.528 "iscsi_get_connections", 00:04:45.528 "iscsi_portal_group_set_auth", 00:04:45.528 "iscsi_start_portal_group", 00:04:45.528 "iscsi_delete_portal_group", 00:04:45.528 "iscsi_create_portal_group", 00:04:45.528 "iscsi_get_portal_groups", 00:04:45.528 "iscsi_delete_target_node", 00:04:45.528 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.528 "iscsi_target_node_add_pg_ig_maps", 00:04:45.528 "iscsi_create_target_node", 00:04:45.528 "iscsi_get_target_nodes", 00:04:45.528 "iscsi_delete_initiator_group", 00:04:45.528 "iscsi_initiator_group_remove_initiators", 00:04:45.528 "iscsi_initiator_group_add_initiators", 00:04:45.528 "iscsi_create_initiator_group", 00:04:45.528 "iscsi_get_initiator_groups", 00:04:45.528 "keyring_linux_set_options", 00:04:45.528 "keyring_file_remove_key", 00:04:45.528 "keyring_file_add_key", 00:04:45.528 "vfu_virtio_create_scsi_endpoint", 00:04:45.528 "vfu_virtio_scsi_remove_target", 00:04:45.528 
"vfu_virtio_scsi_add_target", 00:04:45.528 "vfu_virtio_create_blk_endpoint", 00:04:45.528 "vfu_virtio_delete_endpoint", 00:04:45.528 "iaa_scan_accel_module", 00:04:45.528 "dsa_scan_accel_module", 00:04:45.528 "ioat_scan_accel_module", 00:04:45.528 "accel_error_inject_error", 00:04:45.528 "bdev_iscsi_delete", 00:04:45.528 "bdev_iscsi_create", 00:04:45.528 "bdev_iscsi_set_options", 00:04:45.528 "bdev_virtio_attach_controller", 00:04:45.528 "bdev_virtio_scsi_get_devices", 00:04:45.528 "bdev_virtio_detach_controller", 00:04:45.528 "bdev_virtio_blk_set_hotplug", 00:04:45.528 "bdev_ftl_set_property", 00:04:45.528 "bdev_ftl_get_properties", 00:04:45.528 "bdev_ftl_get_stats", 00:04:45.528 "bdev_ftl_unmap", 00:04:45.528 "bdev_ftl_unload", 00:04:45.528 "bdev_ftl_delete", 00:04:45.528 "bdev_ftl_load", 00:04:45.528 "bdev_ftl_create", 00:04:45.528 "bdev_aio_delete", 00:04:45.528 "bdev_aio_rescan", 00:04:45.528 "bdev_aio_create", 00:04:45.528 "blobfs_create", 00:04:45.528 "blobfs_detect", 00:04:45.528 "blobfs_set_cache_size", 00:04:45.528 "bdev_zone_block_delete", 00:04:45.528 "bdev_zone_block_create", 00:04:45.528 "bdev_delay_delete", 00:04:45.528 "bdev_delay_create", 00:04:45.528 "bdev_delay_update_latency", 00:04:45.528 "bdev_split_delete", 00:04:45.528 "bdev_split_create", 00:04:45.528 "bdev_error_inject_error", 00:04:45.528 "bdev_error_delete", 00:04:45.528 "bdev_error_create", 00:04:45.528 "bdev_raid_set_options", 00:04:45.528 "bdev_raid_remove_base_bdev", 00:04:45.528 "bdev_raid_add_base_bdev", 00:04:45.528 "bdev_raid_delete", 00:04:45.528 "bdev_raid_create", 00:04:45.528 "bdev_raid_get_bdevs", 00:04:45.528 "bdev_lvol_set_parent_bdev", 00:04:45.528 "bdev_lvol_set_parent", 00:04:45.528 "bdev_lvol_check_shallow_copy", 00:04:45.528 "bdev_lvol_start_shallow_copy", 00:04:45.528 "bdev_lvol_grow_lvstore", 00:04:45.528 "bdev_lvol_get_lvols", 00:04:45.528 "bdev_lvol_get_lvstores", 00:04:45.528 "bdev_lvol_delete", 00:04:45.528 "bdev_lvol_set_read_only", 00:04:45.528 "bdev_lvol_resize", 00:04:45.528 "bdev_lvol_decouple_parent", 00:04:45.528 "bdev_lvol_inflate", 00:04:45.528 "bdev_lvol_rename", 00:04:45.528 "bdev_lvol_clone_bdev", 00:04:45.528 "bdev_lvol_clone", 00:04:45.528 "bdev_lvol_snapshot", 00:04:45.528 "bdev_lvol_create", 00:04:45.528 "bdev_lvol_delete_lvstore", 00:04:45.528 "bdev_lvol_rename_lvstore", 00:04:45.528 "bdev_lvol_create_lvstore", 00:04:45.528 "bdev_passthru_delete", 00:04:45.528 "bdev_passthru_create", 00:04:45.528 "bdev_nvme_cuse_unregister", 00:04:45.528 "bdev_nvme_cuse_register", 00:04:45.528 "bdev_opal_new_user", 00:04:45.528 "bdev_opal_set_lock_state", 00:04:45.528 "bdev_opal_delete", 00:04:45.528 "bdev_opal_get_info", 00:04:45.528 "bdev_opal_create", 00:04:45.528 "bdev_nvme_opal_revert", 00:04:45.528 "bdev_nvme_opal_init", 00:04:45.528 "bdev_nvme_send_cmd", 00:04:45.528 "bdev_nvme_get_path_iostat", 00:04:45.528 "bdev_nvme_get_mdns_discovery_info", 00:04:45.528 "bdev_nvme_stop_mdns_discovery", 00:04:45.528 "bdev_nvme_start_mdns_discovery", 00:04:45.528 "bdev_nvme_set_multipath_policy", 00:04:45.528 "bdev_nvme_set_preferred_path", 00:04:45.528 "bdev_nvme_get_io_paths", 00:04:45.528 "bdev_nvme_remove_error_injection", 00:04:45.528 "bdev_nvme_add_error_injection", 00:04:45.528 "bdev_nvme_get_discovery_info", 00:04:45.528 "bdev_nvme_stop_discovery", 00:04:45.528 "bdev_nvme_start_discovery", 00:04:45.528 "bdev_nvme_get_controller_health_info", 00:04:45.528 "bdev_nvme_disable_controller", 00:04:45.528 "bdev_nvme_enable_controller", 00:04:45.528 "bdev_nvme_reset_controller", 00:04:45.528 
"bdev_nvme_get_transport_statistics", 00:04:45.528 "bdev_nvme_apply_firmware", 00:04:45.528 "bdev_nvme_detach_controller", 00:04:45.528 "bdev_nvme_get_controllers", 00:04:45.528 "bdev_nvme_attach_controller", 00:04:45.528 "bdev_nvme_set_hotplug", 00:04:45.528 "bdev_nvme_set_options", 00:04:45.528 "bdev_null_resize", 00:04:45.528 "bdev_null_delete", 00:04:45.528 "bdev_null_create", 00:04:45.528 "bdev_malloc_delete", 00:04:45.528 "bdev_malloc_create" 00:04:45.528 ] 00:04:45.528 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.528 10:25:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.529 10:25:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.529 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.529 10:25:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3413049 00:04:45.529 10:25:33 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3413049 ']' 00:04:45.529 10:25:33 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3413049 00:04:45.529 10:25:33 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:45.529 10:25:33 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:45.529 10:25:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3413049 00:04:45.529 10:25:34 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:45.529 10:25:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:45.529 10:25:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3413049' 00:04:45.529 killing process with pid 3413049 00:04:45.529 10:25:34 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3413049 00:04:45.529 10:25:34 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3413049 00:04:46.097 00:04:46.097 real 0m1.017s 00:04:46.097 user 0m1.701s 00:04:46.097 sys 0m0.458s 00:04:46.097 10:25:34 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.097 10:25:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.097 ************************************ 00:04:46.097 END TEST spdkcli_tcp 00:04:46.097 ************************************ 00:04:46.097 10:25:34 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.097 10:25:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.097 10:25:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.097 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:04:46.097 ************************************ 00:04:46.097 START TEST dpdk_mem_utility 00:04:46.097 ************************************ 00:04:46.097 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.097 * Looking for test storage... 
00:04:46.097 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:46.097 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.097 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3413295 00:04:46.097 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3413295 00:04:46.097 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.097 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3413295 ']' 00:04:46.097 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.097 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:46.097 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.097 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:46.097 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.097 [2024-07-23 10:25:34.530076] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:04:46.097 [2024-07-23 10:25:34.530142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413295 ] 00:04:46.097 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.355 [2024-07-23 10:25:34.598630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.355 [2024-07-23 10:25:34.640323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.355 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:46.356 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:46.356 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.356 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.356 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.356 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.356 { 00:04:46.356 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.356 } 00:04:46.356 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.356 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.615 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:46.615 1 heaps totaling size 814.000000 MiB 00:04:46.615 size: 814.000000 MiB heap id: 0 00:04:46.615 end heaps---------- 00:04:46.615 8 mempools totaling size 598.116089 MiB 00:04:46.615 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.615 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.615 size: 84.521057 MiB name: bdev_io_3413295 00:04:46.615 size: 51.011292 MiB name: evtpool_3413295 00:04:46.615 size: 50.003479 
MiB name: msgpool_3413295 00:04:46.615 size: 21.763794 MiB name: PDU_Pool 00:04:46.615 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.615 size: 0.026123 MiB name: Session_Pool 00:04:46.615 end mempools------- 00:04:46.615 6 memzones totaling size 4.142822 MiB 00:04:46.615 size: 1.000366 MiB name: RG_ring_0_3413295 00:04:46.615 size: 1.000366 MiB name: RG_ring_1_3413295 00:04:46.615 size: 1.000366 MiB name: RG_ring_4_3413295 00:04:46.615 size: 1.000366 MiB name: RG_ring_5_3413295 00:04:46.615 size: 0.125366 MiB name: RG_ring_2_3413295 00:04:46.615 size: 0.015991 MiB name: RG_ring_3_3413295 00:04:46.615 end memzones------- 00:04:46.615 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.615 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:46.615 list of free elements. size: 12.519348 MiB 00:04:46.615 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:46.615 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:46.615 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:46.615 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:46.615 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:46.615 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:46.615 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:46.615 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:46.615 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:46.615 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:46.615 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:46.615 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:46.615 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:46.615 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:46.615 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:46.615 list of standard malloc elements. 
size: 199.218079 MiB 00:04:46.615 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:46.615 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:46.615 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:46.615 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:46.615 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:46.615 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:46.615 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:46.615 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:46.615 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:46.615 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:46.615 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:46.615 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:46.615 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:46.615 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:46.615 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:46.615 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:46.615 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:46.615 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:46.615 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:46.615 list of memzone associated elements. 
size: 602.262573 MiB 00:04:46.615 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:46.615 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:46.615 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:46.615 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:46.615 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:46.615 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3413295_0 00:04:46.615 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:46.615 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3413295_0 00:04:46.615 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:46.615 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3413295_0 00:04:46.615 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:46.615 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:46.615 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:46.615 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:46.615 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:46.615 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3413295 00:04:46.615 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:46.615 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3413295 00:04:46.615 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:46.615 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3413295 00:04:46.615 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:46.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:46.615 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:46.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:46.615 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:46.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:46.615 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:46.615 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:46.615 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:46.615 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3413295 00:04:46.615 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:46.615 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3413295 00:04:46.615 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:46.615 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3413295 00:04:46.615 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:46.615 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3413295 00:04:46.615 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:46.615 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3413295 00:04:46.615 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:46.616 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:46.616 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:46.616 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:46.616 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:46.616 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:46.616 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:46.616 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3413295 00:04:46.616 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:46.616 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:46.616 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:46.616 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:46.616 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:46.616 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3413295 00:04:46.616 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:46.616 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:46.616 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:46.616 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3413295 00:04:46.616 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:46.616 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3413295 00:04:46.616 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:46.616 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:46.616 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:46.616 10:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3413295 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3413295 ']' 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3413295 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3413295 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3413295' 00:04:46.616 killing process with pid 3413295 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3413295 00:04:46.616 10:25:34 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3413295 00:04:46.875 00:04:46.875 real 0m0.864s 00:04:46.875 user 0m0.776s 00:04:46.875 sys 0m0.391s 00:04:46.875 10:25:35 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.875 10:25:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.875 ************************************ 00:04:46.875 END TEST dpdk_mem_utility 00:04:46.875 ************************************ 00:04:46.875 10:25:35 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:46.875 10:25:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.875 10:25:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.875 10:25:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.875 ************************************ 00:04:46.875 START TEST event 00:04:46.875 ************************************ 00:04:46.875 10:25:35 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:47.134 * Looking for test storage... 
00:04:47.134 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:47.134 10:25:35 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:47.134 10:25:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:47.134 10:25:35 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.134 10:25:35 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:47.134 10:25:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.134 10:25:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.134 ************************************ 00:04:47.134 START TEST event_perf 00:04:47.134 ************************************ 00:04:47.134 10:25:35 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.134 Running I/O for 1 seconds...[2024-07-23 10:25:35.499684] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:04:47.134 [2024-07-23 10:25:35.499799] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413519 ] 00:04:47.134 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.134 [2024-07-23 10:25:35.572757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.134 [2024-07-23 10:25:35.619702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.134 [2024-07-23 10:25:35.619803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.134 [2024-07-23 10:25:35.619847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.134 [2024-07-23 10:25:35.619849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.510 Running I/O for 1 seconds... 00:04:48.510 lcore 0: 196731 00:04:48.510 lcore 1: 196730 00:04:48.510 lcore 2: 196728 00:04:48.510 lcore 3: 196729 00:04:48.510 done. 00:04:48.510 00:04:48.510 real 0m1.202s 00:04:48.510 user 0m4.093s 00:04:48.510 sys 0m0.107s 00:04:48.510 10:25:36 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.510 10:25:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.510 ************************************ 00:04:48.510 END TEST event_perf 00:04:48.510 ************************************ 00:04:48.510 10:25:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.510 10:25:36 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:48.510 10:25:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.510 10:25:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.510 ************************************ 00:04:48.510 START TEST event_reactor 00:04:48.510 ************************************ 00:04:48.510 10:25:36 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.510 [2024-07-23 10:25:36.768249] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
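The event_perf numbers above are per-lcore event counts for a one-second run, i.e. roughly 197k events per core per second on this host. The test is a plain binary and can be run standalone (repo-root path; flags as in this run):

    # -m 0xF enables lcores 0-3, -t 1 runs the measurement for one second;
    # each enabled lcore prints its own "lcore N: <events>" total at the end.
    ./test/event/event_perf/event_perf -m 0xF -t 1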
00:04:48.511 [2024-07-23 10:25:36.768330] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413662 ] 00:04:48.511 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.511 [2024-07-23 10:25:36.840464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.511 [2024-07-23 10:25:36.884041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.448 test_start 00:04:49.448 oneshot 00:04:49.448 tick 100 00:04:49.448 tick 100 00:04:49.448 tick 250 00:04:49.448 tick 100 00:04:49.448 tick 100 00:04:49.448 tick 100 00:04:49.448 tick 250 00:04:49.448 tick 500 00:04:49.448 tick 100 00:04:49.448 tick 100 00:04:49.448 tick 250 00:04:49.448 tick 100 00:04:49.448 tick 100 00:04:49.448 test_end 00:04:49.448 00:04:49.448 real 0m1.196s 00:04:49.448 user 0m1.091s 00:04:49.448 sys 0m0.101s 00:04:49.448 10:25:37 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.448 10:25:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:49.448 ************************************ 00:04:49.448 END TEST event_reactor 00:04:49.448 ************************************ 00:04:49.708 10:25:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.708 10:25:37 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:49.708 10:25:37 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.708 10:25:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.708 ************************************ 00:04:49.708 START TEST event_reactor_perf 00:04:49.708 ************************************ 00:04:49.708 10:25:38 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.708 [2024-07-23 10:25:38.035893] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
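The tick trace above is the reactor test scheduling one-shot and periodic timer events on a single core; the 100/250/500 values are the configured periods, bracketed by test_start/test_end. Standalone it is simply (repo-root path; the harness supplies -c 0x1 to EAL as seen above):

    # One-second single-core run of the reactor event test.
    ./test/event/reactor/reactor -t 1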
00:04:49.708 [2024-07-23 10:25:38.036008] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413803 ] 00:04:49.708 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.708 [2024-07-23 10:25:38.108115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.708 [2024-07-23 10:25:38.151794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.085 test_start 00:04:51.086 test_end 00:04:51.086 Performance: 955203 events per second 00:04:51.086 00:04:51.086 real 0m1.197s 00:04:51.086 user 0m1.099s 00:04:51.086 sys 0m0.094s 00:04:51.086 10:25:39 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.086 10:25:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.086 ************************************ 00:04:51.086 END TEST event_reactor_perf 00:04:51.086 ************************************ 00:04:51.086 10:25:39 event -- event/event.sh@49 -- # uname -s 00:04:51.086 10:25:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:51.086 10:25:39 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.086 10:25:39 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.086 10:25:39 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.086 10:25:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.086 ************************************ 00:04:51.086 START TEST event_scheduler 00:04:51.086 ************************************ 00:04:51.086 10:25:39 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.086 * Looking for test storage... 00:04:51.086 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:04:51.086 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:51.086 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3414026 00:04:51.086 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.086 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:51.086 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3414026 00:04:51.086 10:25:39 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3414026 ']' 00:04:51.086 10:25:39 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.086 10:25:39 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:51.086 10:25:39 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
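Worth noting on the scheduler run that is starting: the app is launched with --wait-for-rpc, which parks subsystem initialization so the test can switch to the dynamic scheduler over RPC first; only then is framework_start_init issued. Condensed, with rpc_cmd assumed to map to scripts/rpc.py:

    # -m 0xF: reactors on lcores 0-3; -p 0x2: lcore 2 as main core (cf. --main-lcore=2 below).
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

    # Pick the dynamic scheduler while init is parked, then let the framework come up.
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init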
00:04:51.086 10:25:39 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:51.086 10:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.086 [2024-07-23 10:25:39.427765] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:04:51.086 [2024-07-23 10:25:39.427869] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414026 ] 00:04:51.086 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.086 [2024-07-23 10:25:39.496647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.086 [2024-07-23 10:25:39.544236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.086 [2024-07-23 10:25:39.544253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.086 [2024-07-23 10:25:39.544270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.086 [2024-07-23 10:25:39.544272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:51.346 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 POWER: Env isn't set yet! 00:04:51.346 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:51.346 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:51.346 POWER: Cannot set governor of lcore 0 to userspace 00:04:51.346 POWER: Attempting to initialise PSTAT power management... 
00:04:51.346 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:51.346 POWER: Initialized successfully for lcore 0 power management 00:04:51.346 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:51.346 POWER: Initialized successfully for lcore 1 power management 00:04:51.346 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:51.346 POWER: Initialized successfully for lcore 2 power management 00:04:51.346 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:51.346 POWER: Initialized successfully for lcore 3 power management 00:04:51.346 [2024-07-23 10:25:39.639047] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:51.346 [2024-07-23 10:25:39.639063] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:51.346 [2024-07-23 10:25:39.639074] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 [2024-07-23 10:25:39.704765] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 ************************************ 00:04:51.346 START TEST scheduler_create_thread 00:04:51.346 ************************************ 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 2 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 3 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 4 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 5 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 6 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 7 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 8 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 9 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.346 10 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.346 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.606 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.606 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:51.606 10:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:51.606 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.606 10:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.545 10:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.546 10:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:52.546 10:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.546 10:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.925 10:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.925 10:25:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:53.925 10:25:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:53.925 10:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.925 10:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.863 10:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.863 00:04:54.863 real 0m3.381s 00:04:54.863 user 0m0.026s 00:04:54.863 sys 0m0.005s 00:04:54.863 10:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.863 10:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.863 ************************************ 00:04:54.863 END TEST scheduler_create_thread 00:04:54.863 ************************************ 00:04:54.863 10:25:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:54.863 10:25:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3414026 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3414026 ']' 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3414026 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
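The scheduler_create_thread test that just finished drives placement through the scheduler_plugin RPC extensions: it creates busy ("active_pinned", -a 100) and idle ("idle_pinned", -a 0) threads pinned to each of the four cores, then changes one thread's activity and deletes another. A condensed sketch, assuming rpc_cmd wraps scripts/rpc.py and reusing the thread ids (11, 12) this run returned:

    # Create a thread pinned to lcore 0 that reports itself 100% busy; the call prints its id.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

    # Set thread 11 to 50% activity, then delete thread 12.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12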
00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3414026 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3414026' 00:04:54.863 killing process with pid 3414026 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3414026 00:04:54.863 10:25:43 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3414026 00:04:55.122 [2024-07-23 10:25:43.504595] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:55.122 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:55.122 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:55.122 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:55.122 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:55.122 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:55.122 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:55.122 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:55.122 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:55.382 00:04:55.382 real 0m4.443s 00:04:55.382 user 0m7.843s 00:04:55.382 sys 0m0.413s 00:04:55.382 10:25:43 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.382 10:25:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.382 ************************************ 00:04:55.382 END TEST event_scheduler 00:04:55.382 ************************************ 00:04:55.382 10:25:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:55.382 10:25:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:55.382 10:25:43 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.382 10:25:43 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.382 10:25:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.382 ************************************ 00:04:55.382 START TEST app_repeat 00:04:55.382 ************************************ 00:04:55.382 10:25:43 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3414764 00:04:55.382 10:25:43 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3414764' 00:04:55.382 Process app_repeat pid: 3414764 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:55.382 spdk_app_start Round 0 00:04:55.382 10:25:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3414764 /var/tmp/spdk-nbd.sock 00:04:55.382 10:25:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3414764 ']' 00:04:55.382 10:25:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.382 10:25:43 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:55.382 10:25:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:55.382 10:25:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:55.382 10:25:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.382 [2024-07-23 10:25:43.864986] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:04:55.382 [2024-07-23 10:25:43.865073] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414764 ] 00:04:55.641 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.641 [2024-07-23 10:25:43.938134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.641 [2024-07-23 10:25:43.984792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.641 [2024-07-23 10:25:43.984793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.641 10:25:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.641 10:25:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:55.641 10:25:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.900 Malloc0 00:04:55.901 10:25:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.160 Malloc1 00:04:56.160 10:25:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.160 10:25:44 
event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.160 /dev/nbd0 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.160 1+0 records in 00:04:56.160 1+0 records out 00:04:56.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261012 s, 15.7 MB/s 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:56.160 10:25:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.160 10:25:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.419 /dev/nbd1 00:04:56.419 10:25:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.419 10:25:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:56.419 10:25:44 
event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.419 1+0 records in 00:04:56.419 1+0 records out 00:04:56.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253359 s, 16.2 MB/s 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:56.419 10:25:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:56.419 10:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.419 10:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.419 10:25:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.419 10:25:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.419 10:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.678 { 00:04:56.678 "nbd_device": "/dev/nbd0", 00:04:56.678 "bdev_name": "Malloc0" 00:04:56.678 }, 00:04:56.678 { 00:04:56.678 "nbd_device": "/dev/nbd1", 00:04:56.678 "bdev_name": "Malloc1" 00:04:56.678 } 00:04:56.678 ]' 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.678 { 00:04:56.678 "nbd_device": "/dev/nbd0", 00:04:56.678 "bdev_name": "Malloc0" 00:04:56.678 }, 00:04:56.678 { 00:04:56.678 "nbd_device": "/dev/nbd1", 00:04:56.678 "bdev_name": "Malloc1" 00:04:56.678 } 00:04:56.678 ]' 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.678 /dev/nbd1' 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.678 /dev/nbd1' 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.678 
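The waitfornbd helper traced above boils down to a poll-then-probe pattern: poll /proc/partitions until the kernel registers the device, then prove it actually serves reads with a single O_DIRECT read. A minimal sketch under illustrative paths (/tmp/nbdtest stands in for the suite's nbdtest file, rpc.py abbreviates the full workspace path shown in the trace, and the retry interval is an assumption):

  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  for i in $(seq 1 20); do                       # up to 20 attempts, as traced
      grep -q -w nbd0 /proc/partitions && break  # kernel has registered the device
      sleep 0.1                                  # back-off interval is an assumption
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # O_DIRECT read probe
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]                               # non-empty read => device is usable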
10:25:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.678 256+0 records in 00:04:56.678 256+0 records out 00:04:56.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115661 s, 90.7 MB/s 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.678 256+0 records in 00:04:56.678 256+0 records out 00:04:56.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213711 s, 49.1 MB/s 00:04:56.678 10:25:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.679 256+0 records in 00:04:56.679 256+0 records out 00:04:56.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230448 s, 45.5 MB/s 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.679 10:25:45 
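The write/verify pass above is plain dd plus cmp: fill a scratch file with random bytes, stream it to each NBD device with O_DIRECT, then compare the device contents byte for byte. In outline, with /tmp/nbdrandtest standing in for the suite's scratch file:

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256             # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct  # O_DIRECT write
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M /tmp/nbdrandtest "$dev"   # byte-wise read-back; non-zero exit on mismatch
  done
  rm /tmp/nbdrandtest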
event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.679 10:25:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.937 10:25:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.196 10:25:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.500 10:25:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.500 10:25:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
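Teardown mirrors the start path: stop each export over RPC, wait for the kernel node to vanish from /proc/partitions, confirm nbd_get_disks returns an empty list, then ask the app to exit. A sketch (loop bound from the trace; the sleep interval is an assumption):

  for name in nbd0 nbd1; do
      rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "/dev/$name"
      for i in $(seq 1 20); do
          grep -q -w "$name" /proc/partitions || break   # gone once the kernel releases it
          sleep 0.1
      done
  done
  # grep -c prints 0 but exits 1 on zero matches, hence the 'true' in the trace
  count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # graceful shutdown via RPC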
00:04:57.500 10:25:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.794 [2024-07-23 10:25:46.154606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.794 [2024-07-23 10:25:46.196850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.794 [2024-07-23 10:25:46.196852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.794 [2024-07-23 10:25:46.243903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.794 [2024-07-23 10:25:46.243956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.082 10:25:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.082 10:25:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:01.082 spdk_app_start Round 1 00:05:01.082 10:25:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3414764 /var/tmp/spdk-nbd.sock 00:05:01.082 10:25:48 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3414764 ']' 00:05:01.082 10:25:48 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.082 10:25:48 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:01.082 10:25:48 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.082 10:25:48 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:01.082 10:25:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.082 10:25:49 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:01.082 10:25:49 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:01.082 10:25:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.082 Malloc0 00:05:01.082 10:25:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.082 Malloc1 00:05:01.082 10:25:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.082 
10:25:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.082 10:25:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.341 /dev/nbd0 00:05:01.341 10:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.341 10:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.341 1+0 records in 00:05:01.341 1+0 records out 00:05:01.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224176 s, 18.3 MB/s 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:01.341 10:25:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:01.341 10:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.341 10:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.341 10:25:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.599 /dev/nbd1 00:05:01.599 10:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.599 10:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:01.599 
10:25:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.599 1+0 records in 00:05:01.599 1+0 records out 00:05:01.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263144 s, 15.6 MB/s 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:01.599 10:25:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:01.599 10:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.599 10:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.599 10:25:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.599 10:25:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.600 10:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.859 { 00:05:01.859 "nbd_device": "/dev/nbd0", 00:05:01.859 "bdev_name": "Malloc0" 00:05:01.859 }, 00:05:01.859 { 00:05:01.859 "nbd_device": "/dev/nbd1", 00:05:01.859 "bdev_name": "Malloc1" 00:05:01.859 } 00:05:01.859 ]' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.859 { 00:05:01.859 "nbd_device": "/dev/nbd0", 00:05:01.859 "bdev_name": "Malloc0" 00:05:01.859 }, 00:05:01.859 { 00:05:01.859 "nbd_device": "/dev/nbd1", 00:05:01.859 "bdev_name": "Malloc1" 00:05:01.859 } 00:05:01.859 ]' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.859 /dev/nbd1' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.859 /dev/nbd1' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.859 10:25:50 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.859 256+0 records in 00:05:01.859 256+0 records out 00:05:01.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00362598 s, 289 MB/s 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.859 256+0 records in 00:05:01.859 256+0 records out 00:05:01.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210984 s, 49.7 MB/s 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.859 256+0 records in 00:05:01.859 256+0 records out 00:05:01.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02254 s, 46.5 MB/s 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.859 10:25:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.118 10:25:50 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.118 10:25:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.377 10:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.637 10:25:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.637 10:25:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.896 10:25:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.896 [2024-07-23 10:25:51.322020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.896 [2024-07-23 10:25:51.364258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.896 [2024-07-23 10:25:51.364263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.156 [2024-07-23 10:25:51.412193] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:03.156 [2024-07-23 10:25:51.412240] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.691 10:25:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.691 10:25:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.691 spdk_app_start Round 2 00:05:05.691 10:25:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3414764 /var/tmp/spdk-nbd.sock 00:05:05.691 10:25:54 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3414764 ']' 00:05:05.691 10:25:54 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.691 10:25:54 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:05.691 10:25:54 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.691 10:25:54 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:05.691 10:25:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.950 10:25:54 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:05.950 10:25:54 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:05.950 10:25:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.209 Malloc0 00:05:06.209 10:25:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.209 Malloc1 00:05:06.209 10:25:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.209 10:25:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.468 /dev/nbd0 00:05:06.468 
10:25:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.468 10:25:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.468 1+0 records in 00:05:06.468 1+0 records out 00:05:06.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225416 s, 18.2 MB/s 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:06.468 10:25:54 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:06.468 10:25:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.468 10:25:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.468 10:25:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.727 /dev/nbd1 00:05:06.728 10:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.728 10:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.728 1+0 records in 00:05:06.728 1+0 records out 00:05:06.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255544 s, 16.0 MB/s 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:06.728 10:25:55 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:06.728 10:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.728 10:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.728 10:25:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.728 10:25:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.728 10:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.987 10:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.987 { 00:05:06.987 "nbd_device": "/dev/nbd0", 00:05:06.987 "bdev_name": "Malloc0" 00:05:06.987 }, 00:05:06.987 { 00:05:06.987 "nbd_device": "/dev/nbd1", 00:05:06.987 "bdev_name": "Malloc1" 00:05:06.987 } 00:05:06.987 ]' 00:05:06.987 10:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.987 { 00:05:06.987 "nbd_device": "/dev/nbd0", 00:05:06.987 "bdev_name": "Malloc0" 00:05:06.987 }, 00:05:06.987 { 00:05:06.987 "nbd_device": "/dev/nbd1", 00:05:06.987 "bdev_name": "Malloc1" 00:05:06.987 } 00:05:06.987 ]' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.988 /dev/nbd1' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.988 /dev/nbd1' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.988 256+0 records in 00:05:06.988 256+0 records out 00:05:06.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108116 s, 97.0 MB/s 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.988 256+0 records in 00:05:06.988 256+0 records out 00:05:06.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020877 s, 50.2 MB/s 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.988 256+0 records in 00:05:06.988 256+0 records out 00:05:06.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227703 s, 46.1 MB/s 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.988 10:25:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@45 
-- # return 0 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.247 10:25:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.506 10:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.766 10:25:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.766 10:25:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.766 10:25:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.025 [2024-07-23 10:25:56.417491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.025 [2024-07-23 10:25:56.459415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.025 [2024-07-23 10:25:56.459418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.025 [2024-07-23 10:25:56.506354] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.025 [2024-07-23 10:25:56.506408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:11.315 10:25:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3414764 /var/tmp/spdk-nbd.sock 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3414764 ']' 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.315 10:25:59 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:11.315 10:25:59 event.app_repeat -- event/event.sh@39 -- # killprocess 3414764 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3414764 ']' 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3414764 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3414764 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3414764' 00:05:11.316 killing process with pid 3414764 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3414764 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3414764 00:05:11.316 spdk_app_start is called in Round 0. 00:05:11.316 Shutdown signal received, stop current app iteration 00:05:11.316 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:05:11.316 spdk_app_start is called in Round 1. 00:05:11.316 Shutdown signal received, stop current app iteration 00:05:11.316 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:05:11.316 spdk_app_start is called in Round 2. 00:05:11.316 Shutdown signal received, stop current app iteration 00:05:11.316 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:05:11.316 spdk_app_start is called in Round 3. 
00:05:11.316 Shutdown signal received, stop current app iteration 00:05:11.316 10:25:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.316 10:25:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.316 00:05:11.316 real 0m15.780s 00:05:11.316 user 0m33.535s 00:05:11.316 sys 0m3.178s 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.316 10:25:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.316 ************************************ 00:05:11.316 END TEST app_repeat 00:05:11.316 ************************************ 00:05:11.316 10:25:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.316 10:25:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.316 10:25:59 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.316 10:25:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.316 10:25:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.316 ************************************ 00:05:11.316 START TEST cpu_locks 00:05:11.316 ************************************ 00:05:11.316 10:25:59 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.316 * Looking for test storage... 00:05:11.316 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:11.316 10:25:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.316 10:25:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.316 10:25:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.316 10:25:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.316 10:25:59 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.316 10:25:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.316 10:25:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.575 ************************************ 00:05:11.575 START TEST default_locks 00:05:11.575 ************************************ 00:05:11.575 10:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3417003 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3417003 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3417003 ']' 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:11.576 10:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.576 10:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.576 [2024-07-23 10:25:59.863165] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:11.576 [2024-07-23 10:25:59.863240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417003 ] 00:05:11.576 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.576 [2024-07-23 10:25:59.937701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.576 [2024-07-23 10:25:59.981250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.835 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.835 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:11.835 10:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3417003 00:05:11.835 10:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3417003 00:05:11.835 10:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.404 lslocks: write error 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3417003 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3417003 ']' 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3417003 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417003 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417003' 00:05:12.404 killing process with pid 3417003 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3417003 00:05:12.404 10:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3417003 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3417003 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3417003 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3417003 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3417003 ']' 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.664 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3417003) - No such process 00:05:12.664 ERROR: process (pid: 3417003) is no longer running 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.664 00:05:12.664 real 0m1.320s 00:05:12.664 user 0m1.290s 00:05:12.664 sys 0m0.639s 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.664 10:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.664 ************************************ 00:05:12.664 END TEST default_locks 00:05:12.664 ************************************ 00:05:12.924 10:26:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.924 10:26:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.924 10:26:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.924 10:26:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.924 ************************************ 00:05:12.924 START TEST default_locks_via_rpc 00:05:12.924 ************************************ 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3417237 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3417237 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.924 10:26:01 
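The second half of default_locks is a negative test: once the target is killed, waitforlisten on the stale pid must fail, and the NOT wrapper converts that expected failure (es=1, 'No such process') into a pass before no_locks confirms nothing was left behind. Roughly:

  if waitforlisten "$pid" /var/tmp/spdk.sock; then
      exit 1   # target must be gone; listening again would be a real failure
  fi
  (( ${#lock_files[@]} == 0 ))   # no_locks: no stale CPU lock files remain
                                 # (how lock_files is gathered is not shown in this trace)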
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3417237 ']' 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:12.924 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.924 [2024-07-23 10:26:01.269967] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:12.924 [2024-07-23 10:26:01.270057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417237 ] 00:05:12.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.924 [2024-07-23 10:26:01.340490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.924 [2024-07-23 10:26:01.385540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.182 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:13.182 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:13.182 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.182 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.182 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3417237 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3417237 00:05:13.183 10:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3417237 00:05:13.751 10:26:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3417237 ']' 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3417237 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417237 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417237' 00:05:13.751 killing process with pid 3417237 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3417237 00:05:13.751 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3417237 00:05:14.011 00:05:14.011 real 0m1.216s 00:05:14.011 user 0m1.132s 00:05:14.011 sys 0m0.584s 00:05:14.011 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.011 10:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.011 ************************************ 00:05:14.011 END TEST default_locks_via_rpc 00:05:14.011 ************************************ 00:05:14.011 10:26:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.011 10:26:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.011 10:26:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.011 10:26:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.271 ************************************ 00:05:14.271 START TEST non_locking_app_on_locked_coremask 00:05:14.271 ************************************ 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3417432 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3417432 /var/tmp/spdk.sock 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3417432 ']' 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
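The default_locks_via_rpc case that just wrapped up toggles the core locks over JSON-RPC instead of restarting the target. A minimal sketch of the same two calls, assuming SPDK's scripts/rpc.py client and the default /var/tmp/spdk.sock socket; the method names are taken verbatim from the rpc_cmd calls traced above:

    # Release the per-core flocks held by the running target, then retake them.
    ./scripts/rpc.py framework_disable_cpumask_locks
    ./scripts/rpc.py framework_enable_cpumask_locks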
00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:14.271 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.271 [2024-07-23 10:26:02.567023] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:14.271 [2024-07-23 10:26:02.567107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417432 ] 00:05:14.271 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.271 [2024-07-23 10:26:02.641218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.271 [2024-07-23 10:26:02.683187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3417582 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3417582 /var/tmp/spdk2.sock 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3417582 ']' 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:14.531 10:26:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 [2024-07-23 10:26:02.893679] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:14.531 [2024-07-23 10:26:02.893738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417582 ] 00:05:14.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.531 [2024-07-23 10:26:02.983360] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
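The non_locking_app_on_locked_coremask case now running pairs two targets on the same core, and only the second one opts out of lock claiming. A sketch of the two invocations, with the full workspace paths from the traces shortened:

    # Primary target claims core 0 and flocks /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &
    # Second target shares core 0 only because it skips the claim, on its own socket.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &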
00:05:14.531 [2024-07-23 10:26:02.983390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.791 [2024-07-23 10:26:03.063312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.358 10:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:15.358 10:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:15.358 10:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3417432 00:05:15.358 10:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3417432 00:05:15.358 10:26:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.296 lslocks: write error 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3417432 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3417432 ']' 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3417432 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417432 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417432' 00:05:16.296 killing process with pid 3417432 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3417432 00:05:16.296 10:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3417432 00:05:17.235 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3417582 00:05:17.235 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3417582 ']' 00:05:17.235 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3417582 00:05:17.235 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:17.235 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.235 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417582 00:05:17.236 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.236 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.236 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417582' 00:05:17.236 
killing process with pid 3417582 00:05:17.236 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3417582 00:05:17.236 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3417582 00:05:17.496 00:05:17.496 real 0m3.239s 00:05:17.496 user 0m3.305s 00:05:17.496 sys 0m1.221s 00:05:17.496 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.496 10:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.496 ************************************ 00:05:17.496 END TEST non_locking_app_on_locked_coremask 00:05:17.496 ************************************ 00:05:17.496 10:26:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:17.496 10:26:05 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.496 10:26:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.496 10:26:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.496 ************************************ 00:05:17.496 START TEST locking_app_on_unlocked_coremask 00:05:17.496 ************************************ 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3418000 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3418000 /var/tmp/spdk.sock 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418000 ']' 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.496 10:26:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.496 [2024-07-23 10:26:05.876991] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:17.496 [2024-07-23 10:26:05.877054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418000 ] 00:05:17.496 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.496 [2024-07-23 10:26:05.944833] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
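Every locks_exist check in these traces reduces to asking the kernel whether the target still holds its per-core flocks; the stray "lslocks: write error" lines are a side effect of grep -q closing the pipe after the first match, not a failure. A hedged equivalent, with the pid purely illustrative:

    pid=3418000                                 # any spdk_tgt pid from the traces above
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # exits 0 while the locks are held
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # one lock file per claimed core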
00:05:17.496 [2024-07-23 10:26:05.944885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.496 [2024-07-23 10:26:05.989896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3418013 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3418013 /var/tmp/spdk2.sock 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418013 ']' 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.756 10:26:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 [2024-07-23 10:26:06.200105] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:17.756 [2024-07-23 10:26:06.200198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418013 ] 00:05:17.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.016 [2024-07-23 10:26:06.292569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.016 [2024-07-23 10:26:06.371638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.585 10:26:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.585 10:26:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:18.585 10:26:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3418013 00:05:18.585 10:26:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3418013 00:05:18.585 10:26:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.964 lslocks: write error 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3418000 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3418000 ']' 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3418000 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418000 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418000' 00:05:19.964 killing process with pid 3418000 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3418000 00:05:19.964 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3418000 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3418013 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3418013 ']' 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3418013 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418013 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418013' 00:05:20.534 killing process with pid 3418013 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3418013 00:05:20.534 10:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3418013 00:05:21.104 00:05:21.104 real 0m3.470s 00:05:21.104 user 0m3.620s 00:05:21.104 sys 0m1.332s 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.104 ************************************ 00:05:21.104 END TEST locking_app_on_unlocked_coremask 00:05:21.104 ************************************ 00:05:21.104 10:26:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.104 10:26:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.104 10:26:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.104 10:26:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.104 ************************************ 00:05:21.104 START TEST locking_app_on_locked_coremask 00:05:21.104 ************************************ 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3418417 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3418417 /var/tmp/spdk.sock 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418417 ']' 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.104 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.104 [2024-07-23 10:26:09.430803] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:21.104 [2024-07-23 10:26:09.430864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418417 ] 00:05:21.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.104 [2024-07-23 10:26:09.501182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.104 [2024-07-23 10:26:09.546976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3418558 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3418558 /var/tmp/spdk2.sock 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3418558 /var/tmp/spdk2.sock 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3418558 /var/tmp/spdk2.sock 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418558 ']' 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.364 10:26:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.364 [2024-07-23 10:26:09.752370] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:21.364 [2024-07-23 10:26:09.752429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418558 ] 00:05:21.364 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.364 [2024-07-23 10:26:09.847073] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3418417 has claimed it. 00:05:21.364 [2024-07-23 10:26:09.847113] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.933 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3418558) - No such process 00:05:21.933 ERROR: process (pid: 3418558) is no longer running 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3418417 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3418417 00:05:21.933 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.502 lslocks: write error 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3418417 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3418417 ']' 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3418417 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418417 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418417' 00:05:22.502 killing process with pid 3418417 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3418417 00:05:22.502 10:26:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3418417 00:05:22.762 00:05:22.762 real 0m1.805s 00:05:22.762 user 0m1.868s 00:05:22.762 sys 0m0.677s 00:05:22.762 10:26:11 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.762 10:26:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.762 ************************************ 00:05:22.762 END TEST locking_app_on_locked_coremask 00:05:22.762 ************************************ 00:05:22.762 10:26:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:22.762 10:26:11 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.762 10:26:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.762 10:26:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.022 ************************************ 00:05:23.022 START TEST locking_overlapped_coremask 00:05:23.022 ************************************ 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3418805 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3418805 /var/tmp/spdk.sock 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418805 ']' 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.022 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.022 [2024-07-23 10:26:11.322042] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:23.022 [2024-07-23 10:26:11.322113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418805 ] 00:05:23.022 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.022 [2024-07-23 10:26:11.392445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.022 [2024-07-23 10:26:11.438869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.022 [2024-07-23 10:26:11.438960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.022 [2024-07-23 10:26:11.438963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3418810 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3418810 /var/tmp/spdk2.sock 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3418810 /var/tmp/spdk2.sock 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3418810 /var/tmp/spdk2.sock 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3418810 ']' 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.281 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.282 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.282 10:26:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 [2024-07-23 10:26:11.661803] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:23.282 [2024-07-23 10:26:11.661883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418810 ] 00:05:23.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.282 [2024-07-23 10:26:11.759196] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3418805 has claimed it. 00:05:23.282 [2024-07-23 10:26:11.759240] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.850 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3418810) - No such process 00:05:23.850 ERROR: process (pid: 3418810) is no longer running 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3418805 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3418805 ']' 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3418805 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.850 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418805 00:05:24.110 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.110 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.110 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418805' 00:05:24.110 killing process with pid 3418805 00:05:24.110 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3418805 00:05:24.110 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3418805 00:05:24.370 00:05:24.370 real 0m1.408s 00:05:24.370 user 0m3.799s 00:05:24.370 sys 0m0.441s 00:05:24.370 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.370 10:26:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.370 ************************************ 00:05:24.370 END TEST locking_overlapped_coremask 00:05:24.370 ************************************ 00:05:24.370 10:26:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:24.370 10:26:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.370 10:26:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.370 10:26:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.370 ************************************ 00:05:24.370 START TEST locking_overlapped_coremask_via_rpc 00:05:24.370 ************************************ 00:05:24.370 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:24.370 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:24.370 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3419023 00:05:24.371 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3419023 /var/tmp/spdk.sock 00:05:24.371 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419023 ']' 00:05:24.371 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.371 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.371 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.371 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.371 10:26:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.371 [2024-07-23 10:26:12.806691] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:24.371 [2024-07-23 10:26:12.806760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419023 ] 00:05:24.371 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.630 [2024-07-23 10:26:12.877068] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.630 [2024-07-23 10:26:12.877097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.630 [2024-07-23 10:26:12.923312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.630 [2024-07-23 10:26:12.923399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.630 [2024-07-23 10:26:12.923402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.630 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.630 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:24.630 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3419033 00:05:24.630 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3419033 /var/tmp/spdk2.sock 00:05:24.631 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:24.631 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419033 ']' 00:05:24.631 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.631 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.631 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.631 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.631 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.890 [2024-07-23 10:26:13.141229] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:24.890 [2024-07-23 10:26:13.141300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419033 ] 00:05:24.890 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.890 [2024-07-23 10:26:13.234744] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.890 [2024-07-23 10:26:13.234781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.890 [2024-07-23 10:26:13.316382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.890 [2024-07-23 10:26:13.319824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.890 [2024-07-23 10:26:13.319825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:25.827 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.827 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:25.827 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.827 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.827 10:26:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.827 [2024-07-23 10:26:14.015840] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3419023 has claimed it. 
00:05:25.827 request: 00:05:25.827 { 00:05:25.827 "method": "framework_enable_cpumask_locks", 00:05:25.827 "req_id": 1 00:05:25.827 } 00:05:25.827 Got JSON-RPC error response 00:05:25.827 response: 00:05:25.827 { 00:05:25.827 "code": -32603, 00:05:25.827 "message": "Failed to claim CPU core: 2" 00:05:25.827 } 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3419023 /var/tmp/spdk.sock 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419023 ']' 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3419033 /var/tmp/spdk2.sock 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3419033 ']' 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
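The -32603 response above can be reproduced by hand against the second target's socket, assuming SPDK's scripts/rpc.py client; pid 3419023 still flocks core 2 at this point, so the enable call cannot claim it:

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => {"code": -32603, "message": "Failed to claim CPU core: 2"}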
00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.827 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.087 00:05:26.087 real 0m1.611s 00:05:26.087 user 0m0.729s 00:05:26.087 sys 0m0.168s 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.087 10:26:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.087 ************************************ 00:05:26.087 END TEST locking_overlapped_coremask_via_rpc 00:05:26.087 ************************************ 00:05:26.087 10:26:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:26.087 10:26:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3419023 ]] 00:05:26.087 10:26:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3419023 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419023 ']' 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419023 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419023 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419023' 00:05:26.087 killing process with pid 3419023 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3419023 00:05:26.087 10:26:14 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3419023 00:05:26.346 10:26:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3419033 ]] 00:05:26.346 10:26:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3419033 00:05:26.346 10:26:14 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419033 ']' 00:05:26.346 10:26:14 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419033 00:05:26.346 10:26:14 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:26.346 10:26:14 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:26.346 10:26:14 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419033 00:05:26.606 10:26:14 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:26.606 10:26:14 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:26.606 10:26:14 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419033' 00:05:26.606 killing process with pid 3419033 00:05:26.606 10:26:14 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3419033 00:05:26.606 10:26:14 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3419033 00:05:26.866 10:26:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.866 10:26:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:26.866 10:26:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3419023 ]] 00:05:26.866 10:26:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3419023 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419023 ']' 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419023 00:05:26.866 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3419023) - No such process 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3419023 is not found' 00:05:26.866 Process with pid 3419023 is not found 00:05:26.866 10:26:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3419033 ]] 00:05:26.866 10:26:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3419033 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3419033 ']' 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3419033 00:05:26.866 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3419033) - No such process 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3419033 is not found' 00:05:26.866 Process with pid 3419033 is not found 00:05:26.866 10:26:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.866 00:05:26.866 real 0m15.532s 00:05:26.866 user 0m25.310s 00:05:26.866 sys 0m6.159s 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.866 10:26:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 ************************************ 00:05:26.866 END TEST cpu_locks 00:05:26.866 ************************************ 00:05:26.866 00:05:26.866 real 0m39.914s 00:05:26.866 user 1m13.157s 00:05:26.866 sys 0m10.474s 00:05:26.866 10:26:15 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.866 10:26:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 ************************************ 00:05:26.866 END TEST event 00:05:26.866 ************************************ 00:05:26.866 10:26:15 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:26.866 10:26:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.866 10:26:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.866 10:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 ************************************ 00:05:26.866 START TEST thread 00:05:26.866 ************************************ 00:05:26.866 10:26:15 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:27.126 * Looking for test storage... 00:05:27.126 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:27.126 10:26:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.126 10:26:15 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:27.126 10:26:15 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.126 10:26:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.126 ************************************ 00:05:27.126 START TEST thread_poller_perf 00:05:27.126 ************************************ 00:05:27.126 10:26:15 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.126 [2024-07-23 10:26:15.491904] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:27.126 [2024-07-23 10:26:15.491970] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419494 ] 00:05:27.126 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.126 [2024-07-23 10:26:15.560184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.126 [2024-07-23 10:26:15.600420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.126 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:28.505 ====================================== 00:05:28.505 busy:2304247940 (cyc) 00:05:28.506 total_run_count: 835000 00:05:28.506 tsc_hz: 2300000000 (cyc) 00:05:28.506 ====================================== 00:05:28.506 poller_cost: 2759 (cyc), 1199 (nsec) 00:05:28.506 00:05:28.506 real 0m1.185s 00:05:28.506 user 0m1.091s 00:05:28.506 sys 0m0.090s 00:05:28.506 10:26:16 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.506 10:26:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.506 ************************************ 00:05:28.506 END TEST thread_poller_perf 00:05:28.506 ************************************ 00:05:28.506 10:26:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.506 10:26:16 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:28.506 10:26:16 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.506 10:26:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.506 ************************************ 00:05:28.506 START TEST thread_poller_perf 00:05:28.506 ************************************ 00:05:28.506 10:26:16 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.506 [2024-07-23 10:26:16.767236] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
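Editor's note: a quick cross-check of the first poller_perf summary above (-b 1000 -l 1 -t 1: 1000 pollers, 1 us period, 1 s). poller_cost is the busy cycle count divided by total_run_count, converted to time via tsc_hz:

  2304247940 cyc / 835000 runs ≈ 2759 cyc per poller call
  2759 cyc / 2.3 cyc/ns (tsc_hz 2300000000) ≈ 1199 ns

Both figures match the reported "poller_cost: 2759 (cyc), 1199 (nsec)".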
00:05:28.506 [2024-07-23 10:26:16.767319] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419693 ] 00:05:28.506 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.506 [2024-07-23 10:26:16.838235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.506 [2024-07-23 10:26:16.879965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.506 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:29.575 ====================================== 00:05:29.575 busy:2301296814 (cyc) 00:05:29.575 total_run_count: 13810000 00:05:29.575 tsc_hz: 2300000000 (cyc) 00:05:29.575 ====================================== 00:05:29.575 poller_cost: 166 (cyc), 72 (nsec) 00:05:29.575 00:05:29.575 real 0m1.191s 00:05:29.575 user 0m1.102s 00:05:29.575 sys 0m0.085s 00:05:29.575 10:26:17 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.575 10:26:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.575 ************************************ 00:05:29.575 END TEST thread_poller_perf 00:05:29.576 ************************************ 00:05:29.576 10:26:17 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:29.576 10:26:17 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:29.576 10:26:17 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.576 10:26:17 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.576 10:26:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.576 ************************************ 00:05:29.576 START TEST thread_spdk_lock 00:05:29.576 ************************************ 00:05:29.576 10:26:18 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:29.576 [2024-07-23 10:26:18.013636] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
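Editor's note: the same cross-check for the second run above (-l 0, i.e. zero-period pollers):

  2301296814 cyc / 13810000 runs ≈ 166 cyc per poller call
  166 cyc / 2.3 cyc/ns ≈ 72 ns

With the 1 us timer removed, the pollers fire roughly 16x more often and the per-call cost drops from 2759 to 166 cycles — presumably the contrast the two variants are meant to expose.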
00:05:29.576 [2024-07-23 10:26:18.013680] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419878 ] 00:05:29.576 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.837 [2024-07-23 10:26:18.079574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.837 [2024-07-23 10:26:18.124181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.837 [2024-07-23 10:26:18.124185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.406 [2024-07-23 10:26:18.620371] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:30.406 [2024-07-23 10:26:18.620406] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:30.406 [2024-07-23 10:26:18.620416] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x13107c0 00:05:30.406 [2024-07-23 10:26:18.621273] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:30.406 [2024-07-23 10:26:18.621377] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:30.406 [2024-07-23 10:26:18.621396] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:30.406 Starting test contend 00:05:30.406 Worker Delay Wait us Hold us Total us 00:05:30.406 0 3 175999 190218 366218 00:05:30.406 1 5 96852 289525 386377 00:05:30.406 PASS test contend 00:05:30.406 Starting test hold_by_poller 00:05:30.406 PASS test hold_by_poller 00:05:30.406 Starting test hold_by_message 00:05:30.406 PASS test hold_by_message 00:05:30.406 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:30.406 100014 assertions passed 00:05:30.406 0 assertions failed 00:05:30.406 00:05:30.406 real 0m0.675s 00:05:30.406 user 0m1.081s 00:05:30.406 sys 0m0.089s 00:05:30.406 10:26:18 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.406 10:26:18 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:30.406 ************************************ 00:05:30.406 END TEST thread_spdk_lock 00:05:30.406 ************************************ 00:05:30.406 00:05:30.406 real 0m3.375s 00:05:30.406 user 0m3.405s 00:05:30.406 sys 0m0.482s 00:05:30.406 10:26:18 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.406 10:26:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.406 ************************************ 00:05:30.406 END TEST thread 00:05:30.406 ************************************ 00:05:30.406 10:26:18 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:30.406 10:26:18 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.406 10:26:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.406 10:26:18 -- common/autotest_common.sh@10 -- # set +x 00:05:30.406 ************************************ 00:05:30.406 START TEST accel 00:05:30.406 ************************************ 00:05:30.406 10:26:18 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:30.406 * Looking for test storage... 00:05:30.406 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:30.406 10:26:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:30.406 10:26:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:30.406 10:26:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.406 10:26:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3419966 00:05:30.406 10:26:18 accel -- accel/accel.sh@63 -- # waitforlisten 3419966 00:05:30.406 10:26:18 accel -- common/autotest_common.sh@827 -- # '[' -z 3419966 ']' 00:05:30.406 10:26:18 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.406 10:26:18 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:30.406 10:26:18 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.406 10:26:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:30.406 10:26:18 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.406 10:26:18 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.406 10:26:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.406 10:26:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.406 10:26:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.406 10:26:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.406 10:26:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.406 10:26:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.406 10:26:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:30.406 10:26:18 accel -- accel/accel.sh@41 -- # jq -r . 00:05:30.665 [2024-07-23 10:26:18.911971] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
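Editor's note: once the target is up, get_expected_opcs (traced below) asks it which module handles each accel opcode and records the answers. Reconstructed into readable shell from the xtrace below — the exact script text is an inference; rpc_py resolves to scripts/rpc.py in this workspace:

  declare -A expected_opcs
  exp_opcs=($(scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
      IFS== read -r opc module <<< "$opc_opt"   # split e.g. "copy=software" on '='
      expected_opcs["$opc"]=$module             # every opcode maps to software in this run
  done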
00:05:30.665 [2024-07-23 10:26:18.912039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419966 ] 00:05:30.665 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.665 [2024-07-23 10:26:18.983830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.665 [2024-07-23 10:26:19.026126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.925 10:26:19 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.925 10:26:19 accel -- common/autotest_common.sh@860 -- # return 0 00:05:30.925 10:26:19 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:30.925 10:26:19 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:30.925 10:26:19 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:30.925 10:26:19 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:30.925 10:26:19 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:30.925 10:26:19 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:30.925 10:26:19 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:30.925 10:26:19 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.925 10:26:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.925 10:26:19 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 
10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.925 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.925 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.925 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.926 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.926 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.926 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.926 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.926 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.926 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.926 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.926 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.926 10:26:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.926 10:26:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.926 10:26:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.926 10:26:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.926 10:26:19 accel -- accel/accel.sh@75 -- # killprocess 3419966 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@946 -- # '[' -z 3419966 ']' 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@950 -- # kill -0 3419966 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@951 -- # uname 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3419966 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3419966' 00:05:30.926 killing process with pid 3419966 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@965 -- # kill 3419966 00:05:30.926 10:26:19 accel -- common/autotest_common.sh@970 -- # 
wait 3419966 00:05:31.185 10:26:19 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:31.185 10:26:19 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:31.185 10:26:19 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:31.185 10:26:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.185 10:26:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.185 10:26:19 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:31.185 10:26:19 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:31.185 10:26:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:31.185 10:26:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.185 10:26:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.445 10:26:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.445 10:26:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.445 10:26:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.445 10:26:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:31.445 10:26:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:31.445 10:26:19 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.445 10:26:19 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:31.445 10:26:19 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:31.445 10:26:19 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:31.445 10:26:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.445 10:26:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.445 ************************************ 00:05:31.445 START TEST accel_missing_filename 00:05:31.445 ************************************ 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.445 10:26:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.445 
10:26:19 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:31.445 10:26:19 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:31.445 [2024-07-23 10:26:19.810198] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:31.445 [2024-07-23 10:26:19.810284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420180 ] 00:05:31.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.445 [2024-07-23 10:26:19.884575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.445 [2024-07-23 10:26:19.933734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.705 [2024-07-23 10:26:19.981490] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.705 [2024-07-23 10:26:20.052254] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:31.705 A filename is required. 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.705 00:05:31.705 real 0m0.334s 00:05:31.705 user 0m0.224s 00:05:31.705 sys 0m0.149s 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.705 10:26:20 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:31.705 ************************************ 00:05:31.705 END TEST accel_missing_filename 00:05:31.705 ************************************ 00:05:31.705 10:26:20 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:31.705 10:26:20 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:31.705 10:26:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.705 10:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.705 ************************************ 00:05:31.705 START TEST accel_compress_verify 00:05:31.705 ************************************ 00:05:31.705 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:31.705 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:31.705 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:31.705 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.705 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.705 10:26:20 accel.accel_compress_verify -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.705 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.705 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:31.705 10:26:20 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:31.965 [2024-07-23 10:26:20.218746] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:31.965 [2024-07-23 10:26:20.218836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420205 ] 00:05:31.965 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.965 [2024-07-23 10:26:20.296072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.965 [2024-07-23 10:26:20.336820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.965 [2024-07-23 10:26:20.377388] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.965 [2024-07-23 10:26:20.441790] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:32.224 00:05:32.224 Compression does not support the verify option, aborting. 
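Editor's note: the compress_verify failure below is also deliberate: the test replays accel_perf's compress workload with -y (verify) against the bundled input file and expects the app to refuse. Per the xtrace, the command under test is effectively the following, where $SPDK_DIR stands in for the spdk checkout in this workspace:

  accel_perf -t 1 -w compress -l "$SPDK_DIR/test/accel/bib" -y
  # -> "Compression does not support the verify option, aborting."
  #    (the NOT wrapper then maps the non-zero exit down to es=1)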
00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.224 00:05:32.224 real 0m0.310s 00:05:32.224 user 0m0.208s 00:05:32.224 sys 0m0.141s 00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.224 10:26:20 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:32.224 ************************************ 00:05:32.224 END TEST accel_compress_verify 00:05:32.224 ************************************ 00:05:32.224 10:26:20 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:32.224 10:26:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:32.224 10:26:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.224 10:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.224 ************************************ 00:05:32.224 START TEST accel_wrong_workload 00:05:32.224 ************************************ 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.224 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:32.224 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:32.224 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:32.224 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.225 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.225 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.225 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.225 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.225 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:32.225 10:26:20 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:32.225 Unsupported workload type: foobar 00:05:32.225 [2024-07-23 10:26:20.591149] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:32.225 accel_perf options: 00:05:32.225 [-h help message] 00:05:32.225 [-q queue depth per core] 00:05:32.225 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:32.225 [-T number of threads per core 00:05:32.225 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:32.225 [-t time in seconds] 00:05:32.225 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:32.225 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:32.225 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:32.225 [-l for compress/decompress workloads, name of uncompressed input file 00:05:32.225 [-S for crc32c workload, use this seed value (default 0) 00:05:32.225 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:32.225 [-f for fill workload, use this BYTE value (default 255) 00:05:32.225 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:32.225 [-y verify result if this switch is on] 00:05:32.225 [-a tasks to allocate per core (default: same value as -q)] 00:05:32.225 Can be used to spread operations across a wider range of memory. 00:05:32.225 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:32.225 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.225 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:32.225 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.225 00:05:32.225 real 0m0.025s 00:05:32.225 user 0m0.008s 00:05:32.225 sys 0m0.016s 00:05:32.225 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.225 10:26:20 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:32.225 ************************************ 00:05:32.225 END TEST accel_wrong_workload 00:05:32.225 ************************************ 00:05:32.225 Error: writing output failed: Broken pipe 00:05:32.225 10:26:20 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:32.225 10:26:20 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:32.225 10:26:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.225 10:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.225 ************************************ 00:05:32.225 START TEST accel_negative_buffers 00:05:32.225 ************************************ 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.225 10:26:20 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:32.225 10:26:20 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:32.225 -x option must be non-negative. 00:05:32.225 [2024-07-23 10:26:20.677987] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:32.225 accel_perf options: 00:05:32.225 [-h help message] 00:05:32.225 [-q queue depth per core] 00:05:32.225 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:32.225 [-T number of threads per core 00:05:32.225 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:32.225 [-t time in seconds] 00:05:32.225 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:32.225 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:32.225 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:32.225 [-l for compress/decompress workloads, name of uncompressed input file 00:05:32.225 [-S for crc32c workload, use this seed value (default 0) 00:05:32.225 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:32.225 [-f for fill workload, use this BYTE value (default 255) 00:05:32.225 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:32.225 [-y verify result if this switch is on] 00:05:32.225 [-a tasks to allocate per core (default: same value as -q)] 00:05:32.225 Can be used to spread operations across a wider range of memory. 
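Editor's note: the two NOT tests above feed accel_perf invalid flags on purpose (-w foobar, -x -1) to exercise its argument validation; both die printing the usage text just shown. Going by that usage text and the tests that follow, valid invocations look like:

  accel_perf -t 1 -w crc32c -S 32 -y   # crc32c for 1 s, seed 32, verify results (run next below)
  accel_perf -t 1 -w xor -y -x 2       # xor with the minimum two source buffers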
00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.225 00:05:32.225 real 0m0.026s 00:05:32.225 user 0m0.012s 00:05:32.225 sys 0m0.013s 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.225 10:26:20 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:32.225 ************************************ 00:05:32.225 END TEST accel_negative_buffers 00:05:32.225 ************************************ 00:05:32.225 Error: writing output failed: Broken pipe 00:05:32.485 10:26:20 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:32.485 10:26:20 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:32.485 10:26:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.485 10:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.485 ************************************ 00:05:32.485 START TEST accel_crc32c 00:05:32.485 ************************************ 00:05:32.485 10:26:20 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:32.485 10:26:20 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:32.485 [2024-07-23 10:26:20.782714] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:32.485 [2024-07-23 10:26:20.782800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420419 ] 00:05:32.485 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.485 [2024-07-23 10:26:20.852688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.486 [2024-07-23 10:26:20.895529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:32.486 10:26:20 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.486 10:26:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.865 10:26:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.865 10:26:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.865 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:33.866 10:26:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.866 00:05:33.866 real 0m1.317s 00:05:33.866 user 0m1.180s 00:05:33.866 sys 0m0.144s 00:05:33.866 10:26:22 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.866 10:26:22 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:33.866 ************************************ 00:05:33.866 END TEST accel_crc32c 00:05:33.866 ************************************ 00:05:33.866 10:26:22 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:33.866 10:26:22 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:33.866 10:26:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.866 10:26:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.866 ************************************ 00:05:33.866 START TEST accel_crc32c_C2 00:05:33.866 ************************************ 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:33.866 [2024-07-23 10:26:22.183550] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
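Editor's note: accel_crc32c_C2 (starting above) repeats the crc32c run with -C 2, which per the usage text earlier sets the io vector size to two buffers per operation; the seed is left at its default of 0 (val=0 in the trace below):

  accel_perf -t 1 -w crc32c -y -C 2   # 2-element iovec per submission, default seed 0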
00:05:33.866 [2024-07-23 10:26:22.183638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420632 ] 00:05:33.866 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.866 [2024-07-23 10:26:22.254428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.866 [2024-07-23 10:26:22.297575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.866 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.867 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.867 10:26:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.245 00:05:35.245 real 0m1.321s 00:05:35.245 user 0m1.181s 00:05:35.245 sys 0m0.145s 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.245 10:26:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:35.245 ************************************ 00:05:35.245 END TEST accel_crc32c_C2 00:05:35.245 ************************************ 00:05:35.245 10:26:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:35.245 10:26:23 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:35.245 10:26:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.245 10:26:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.245 ************************************ 00:05:35.245 START TEST accel_copy 00:05:35.245 ************************************ 00:05:35.245 10:26:23 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:35.246 10:26:23 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:05:35.246 [2024-07-23 10:26:23.568354] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:35.246 [2024-07-23 10:26:23.568433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420842 ] 00:05:35.246 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.246 [2024-07-23 10:26:23.641710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.246 [2024-07-23 10:26:23.686138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.246 10:26:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
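[Note: the long run of "# val=...", "# case \"$var\" in", "# IFS=:" and "# read -r var val" lines above is a single loop at accel.sh@19-@23 replaying the settings accel_perf reports. A hedged reconstruction follows; the case patterns and the exact expression at @20 are not visible in the trace, so those parts are illustrative.]

while IFS=: read -r var val; do        # accel.sh@19: split "key: value" lines of output
    val=${val# }                       # accel.sh@20: the traced "val=..." assignments; exact trim assumed
    case "$var" in                     # accel.sh@21
        *opcode*) accel_opc=$val ;;    # accel.sh@23: e.g. "accel_opc=copy" above
        *module*) accel_module=$val ;; # accel.sh@22: e.g. "accel_module=software"
    esac
done < <(accel_perf -t 1 -w copy -y)   # fed by the wrapper invoked at accel.sh@15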
00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:36.623 10:26:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.623 00:05:36.623 real 0m1.321s 00:05:36.623 user 0m1.185s 00:05:36.623 sys 0m0.141s 00:05:36.623 10:26:24 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.623 10:26:24 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:36.623 ************************************ 00:05:36.623 END TEST accel_copy 00:05:36.623 ************************************ 00:05:36.623 10:26:24 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.623 10:26:24 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:36.623 10:26:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.623 10:26:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.623 ************************************ 00:05:36.623 START TEST accel_fill 00:05:36.623 ************************************ 00:05:36.623 10:26:24 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:36.623 10:26:24 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:36.623 [2024-07-23 10:26:24.947570] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
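[Note: each START/END banner pair in this log comes from the run_test helper in common/autotest_common.sh, whose own trace shows the "'[' 13 -le 1 ']'" arg-count check at @1097 and xtrace_disable at @1103. A sketch of the shape those lines imply; the real helper does more bookkeeping, and attributing the per-test real/user/sys lines to the time builtin is an inference.]

run_test() {
    local test_name=$1; shift
    [ $# -le 1 ]             # autotest_common.sh@1097: arg-count sanity check ("'[' 13 -le 1 ']'")
    xtrace_disable           # autotest_common.sh@1103: mute tracing around the banners
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                # runs e.g.: accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}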
00:05:36.623 [2024-07-23 10:26:24.947613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421045 ] 00:05:36.623 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.623 [2024-07-23 10:26:25.013584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.623 [2024-07-23 10:26:25.054316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.623 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.624 10:26:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:38.002 10:26:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.002 00:05:38.002 real 0m1.284s 00:05:38.002 user 0m1.158s 00:05:38.002 sys 0m0.131s 00:05:38.002 10:26:26 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.002 10:26:26 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:38.002 ************************************ 00:05:38.002 END TEST accel_fill 00:05:38.002 ************************************ 00:05:38.002 10:26:26 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:38.002 10:26:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:38.002 10:26:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.002 10:26:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.002 ************************************ 00:05:38.002 START TEST accel_copy_crc32c 00:05:38.002 ************************************ 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:38.002 [2024-07-23 10:26:26.324339] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
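[Note: every test above closes with three accel.sh@27 checks, printed by xtrace with their variables already expanded: "[[ -n software ]]", "[[ -n fill ]]" and "[[ software == \s\o\f\t\w\a\r\e ]]". Given the @22/@23 assignments traced earlier, the unexpanded source is presumably:]

[[ -n $accel_module ]]               # a module name was parsed from accel_perf's output
[[ -n $accel_opc ]]                  # the requested opcode (fill, copy_crc32c, ...) was seen
[[ $accel_module == "software" ]]    # no accelerator JSON was configured, so the
                                     # software engine must have handled the ops

[The backslash-riddled "\s\o\f\t\w\a\r\e" is not corruption: it is how xtrace prints a quoted right-hand side inside [[ == ]] so it is matched literally rather than as a glob pattern.]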
00:05:38.002 [2024-07-23 10:26:26.324422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421243 ] 00:05:38.002 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.002 [2024-07-23 10:26:26.397408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.002 [2024-07-23 10:26:26.441253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.002 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.003 10:26:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.379 10:26:27 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.379 00:05:39.379 real 0m1.324s 00:05:39.379 user 0m1.189s 00:05:39.379 sys 0m0.140s 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.379 10:26:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:39.379 ************************************ 00:05:39.379 END TEST accel_copy_crc32c 00:05:39.379 ************************************ 00:05:39.379 10:26:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:39.379 10:26:27 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:39.379 10:26:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.379 10:26:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.379 ************************************ 00:05:39.379 START TEST accel_copy_crc32c_C2 00:05:39.379 ************************************ 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.380 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:39.380 [2024-07-23 10:26:27.729138] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:39.380 [2024-07-23 10:26:27.729221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421448 ] 00:05:39.380 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.380 [2024-07-23 10:26:27.803137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.380 [2024-07-23 10:26:27.851319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.639 10:26:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.576 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.577 00:05:40.577 real 0m1.330s 00:05:40.577 user 0m1.190s 00:05:40.577 sys 0m0.146s 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.577 10:26:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:40.577 ************************************ 00:05:40.577 END TEST 
accel_copy_crc32c_C2 00:05:40.577 ************************************ 00:05:40.836 10:26:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:40.836 10:26:29 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:40.836 10:26:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.836 10:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.836 ************************************ 00:05:40.836 START TEST accel_dualcast 00:05:40.836 ************************************ 00:05:40.836 10:26:29 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:40.836 [2024-07-23 10:26:29.140524] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:40.836 [2024-07-23 10:26:29.140609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421648 ] 00:05:40.836 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.836 [2024-07-23 10:26:29.213508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.836 [2024-07-23 10:26:29.261964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 
10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.837 10:26:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.215 10:26:30 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:42.215 10:26:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.215 00:05:42.215 real 0m1.331s 00:05:42.215 user 0m1.181s 00:05:42.215 sys 0m0.154s 00:05:42.215 10:26:30 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.215 10:26:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:42.215 ************************************ 00:05:42.215 END TEST accel_dualcast 00:05:42.215 ************************************ 00:05:42.215 10:26:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:42.215 10:26:30 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:42.215 10:26:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.215 10:26:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.215 ************************************ 00:05:42.215 START TEST accel_compare 00:05:42.215 ************************************ 00:05:42.215 10:26:30 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:42.215 [2024-07-23 10:26:30.552856] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
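The dualcast unit above (and every unit below) is bash xtrace from accel.sh: each `-- # val=...` / `case "$var" in` / `IFS=:` / `read -r var val` quartet is one pass of a colon-delimited settings loop that records the opcode (accel_opc=dualcast), the executing module (accel_module=software), the 4096-byte transfer, the 32/32/1 numeric knobs and the 1-second duration, and the closing `[[ -n software ]]` / `[[ -n dualcast ]]` tests assert that both were captured before the unit is declared passed. A minimal self-contained sketch of that idiom (the key names and the inline settings are my stand-ins; the real accel.sh reads the values accel_perf itself reports):

    while IFS=: read -r var val; do
      case "$var" in
        op)     accel_opc=$val ;;      # opcode under test, e.g. dualcast
        module) accel_module=$val ;;   # engine that ran it, e.g. software
      esac
    done < <(printf 'op:dualcast\nmodule:software\n')
    [[ -n $accel_opc && -n $accel_module ]] && echo "ran $accel_opc via $accel_module"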
00:05:42.215 [2024-07-23 10:26:30.552937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421847 ] 00:05:42.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.215 [2024-07-23 10:26:30.623736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.215 [2024-07-23 10:26:30.668029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.215 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 10:26:30 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.475 10:26:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.413 10:26:31 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:43.413 10:26:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.413 00:05:43.413 real 0m1.325s 00:05:43.413 user 0m1.195s 00:05:43.413 sys 0m0.143s 00:05:43.413 10:26:31 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.413 10:26:31 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:43.413 ************************************ 00:05:43.413 END TEST accel_compare 00:05:43.413 ************************************ 00:05:43.413 10:26:31 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:43.413 10:26:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:43.413 10:26:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.413 10:26:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.673 ************************************ 00:05:43.673 START TEST accel_xor 00:05:43.673 ************************************ 00:05:43.673 10:26:31 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:43.673 10:26:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:43.673 [2024-07-23 10:26:31.963170] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
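The compare unit that just closed shows the rhythm every unit in this stretch repeats: a fresh SPDK app boots (each EAL parameter line carries a new --file-prefix=spdk_pid...), accel_perf drives the opcode for the 1-second window set by -t 1, and the real/user/sys triple is the shell timing of the whole child, so real of roughly 1.3 s is the measured second plus app start-up and teardown. Reconstructed from the @12 trace line, the underlying command is the one below; the JSON config on /dev/fd/62 is generated by build_accel_config, and my reading of -y is that it asks accel_perf to verify results:

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w compare -y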
00:05:43.673 [2024-07-23 10:26:31.963253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422054 ] 00:05:43.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.673 [2024-07-23 10:26:32.035199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.673 [2024-07-23 10:26:32.076200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.673 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.673 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.673 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.673 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.673 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.673 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.674 10:26:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.053 
10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:45.053 10:26:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.053 00:05:45.053 real 0m1.310s 00:05:45.053 user 0m1.188s 00:05:45.053 sys 0m0.136s 00:05:45.053 10:26:33 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.053 10:26:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:45.053 ************************************ 00:05:45.053 END TEST accel_xor 00:05:45.053 ************************************ 00:05:45.053 10:26:33 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:45.053 10:26:33 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:45.053 10:26:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.053 10:26:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.053 ************************************ 00:05:45.053 START TEST accel_xor 00:05:45.054 ************************************ 00:05:45.054 10:26:33 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:45.054 [2024-07-23 10:26:33.343424] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
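The xor unit above recorded val=2 right after accel_opc=xor, while the unit starting here was launched with -x 3 and records val=3, so -x is, on my reading, the number of source buffers folded into the destination (two by default). The property the verify pass (-y) checks is plain associative XOR, illustrated with made-up byte values:

    # run_test invocations exactly as they appear in the log:
    #   accel_test -t 1 -w xor -y          # two sources (val=2)
    #   accel_test -t 1 -w xor -y -x 3     # three sources (val=3)
    printf 'dst = 0x%02x\n' $(( 0xAA ^ 0x55 ^ 0x0F ))   # three sources, one result: 0xf0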
00:05:45.054 [2024-07-23 10:26:33.343505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422257 ] 00:05:45.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.054 [2024-07-23 10:26:33.416105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.054 [2024-07-23 10:26:33.459207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.054 10:26:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.433 
10:26:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:46.433 10:26:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.433 00:05:46.433 real 0m1.321s 00:05:46.433 user 0m1.190s 00:05:46.433 sys 0m0.146s 00:05:46.433 10:26:34 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.433 10:26:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:46.433 ************************************ 00:05:46.433 END TEST accel_xor 00:05:46.433 ************************************ 00:05:46.433 10:26:34 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:46.433 10:26:34 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:46.433 10:26:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.433 10:26:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.433 ************************************ 00:05:46.433 START TEST accel_dif_verify 00:05:46.433 ************************************ 00:05:46.433 10:26:34 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:46.433 [2024-07-23 10:26:34.722711] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
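The dif_verify configuration that follows carries more buffer settings than the copy-style workloads: besides two '4096 bytes' values it records '512 bytes' and '8 bytes', which matches T10 DIF framing, where an 8-byte integrity field (guard, application and reference tags) protects each 512-byte block; that reading of the sizes is mine, the harness does not label them. On that assumption the protected layout works out as:

    # Assumed T10 DIF framing for the logged sizes (4096 / 512 / 8):
    data=4096 block=512 dif=8
    blocks=$(( data / block ))
    echo "$blocks blocks, $(( blocks * dif )) DIF bytes per buffer"   # 8 blocks, 64 bytes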
00:05:46.433 [2024-07-23 10:26:34.722752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422454 ] 00:05:46.433 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.433 [2024-07-23 10:26:34.789418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.433 [2024-07-23 10:26:34.832802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 
10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.434 10:26:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.814 
10:26:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:47.814 10:26:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.814 00:05:47.814 real 0m1.309s 00:05:47.814 user 0m1.192s 00:05:47.814 sys 0m0.133s 00:05:47.814 10:26:36 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.814 10:26:36 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:47.814 ************************************ 00:05:47.814 END TEST accel_dif_verify 00:05:47.814 ************************************ 00:05:47.814 10:26:36 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:47.814 10:26:36 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:47.814 10:26:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.814 10:26:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.814 ************************************ 00:05:47.814 START TEST accel_dif_generate 00:05:47.814 ************************************ 00:05:47.814 10:26:36 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.814 
10:26:36 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.814 10:26:36 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:47.815 [2024-07-23 10:26:36.125614] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:47.815 [2024-07-23 10:26:36.125719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422657 ] 00:05:47.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.815 [2024-07-23 10:26:36.196685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.815 [2024-07-23 10:26:36.241868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.815 10:26:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.193 10:26:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:49.194 10:26:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.194 00:05:49.194 real 0m1.326s 00:05:49.194 user 0m1.189s 00:05:49.194 sys 
0m0.151s 00:05:49.194 10:26:37 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.194 10:26:37 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:49.194 ************************************ 00:05:49.194 END TEST accel_dif_generate 00:05:49.194 ************************************ 00:05:49.194 10:26:37 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:49.194 10:26:37 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:49.194 10:26:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.194 10:26:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.194 ************************************ 00:05:49.194 START TEST accel_dif_generate_copy 00:05:49.194 ************************************ 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:49.194 [2024-07-23 10:26:37.525906] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
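Every software-module unit in this section lands between real 0m1.309s and real 0m1.331s: the fixed 1-second measurement window from -t 1 plus roughly a third of a second of per-unit SPDK start-up and teardown, because run_test times the entire child. A sketch of that wrapper, inferred from the START/END banners and the time output (the real helper lives in autotest_common.sh and does considerably more bookkeeping):

    run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"               # emits the real/user/sys triple seen after each unit
      echo "END TEST $name"
    }
    # e.g.: run_test_sketch accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy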
00:05:49.194 [2024-07-23 10:26:37.525906] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:05:49.194 [2024-07-23 10:26:37.525986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422858 ]
00:05:49.194 EAL: No free 2048 kB hugepages reported on node 1
00:05:49.194 [2024-07-23 10:26:37.596600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.194 [2024-07-23 10:26:37.640125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:05:49.194 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:49.454 10:26:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:05:50.392 10:26:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:50.392 10:26:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:05:50.392 10:26:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:50.392
00:05:50.392 real 0m1.323s
00:05:50.392 user 0m1.190s
00:05:50.392 sys 0m0.148s
00:05:50.392 10:26:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:50.392 10:26:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:05:50.392 ************************************
00:05:50.392 END TEST accel_dif_generate_copy
00:05:50.392 ************************************
00:05:50.392 10:26:38 accel -- accel/accel.sh@115 -- # [[ y == y ]]
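The accel.sh@115 [[ y == y ]] check is a build-configuration gate that has already expanded to a tautology in this build, which is why the compression cases below run at all. Pre-expansion it plausibly reads something like the following sketch (the CONFIG_ISAL name is a guess; only the expanded y == y survives in the trace):

# hypothetical pre-expansion form of the @115 gate
if [[ "$CONFIG_ISAL" == y ]]; then
  run_test accel_comp accel_test -t 1 -w compress -l "$testdir/bib"
fi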
00:05:50.392 10:26:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:50.392 10:26:38 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:05:50.652 ************************************
00:05:50.652 START TEST accel_comp
00:05:50.652 ************************************
00:05:50.652 10:26:38 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:50.652 10:26:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:05:50.652 10:26:38 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:05:50.652 10:26:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:50.652 10:26:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:05:50.652 10:26:38 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:50.652 10:26:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:05:50.652 [2024-07-23 10:26:38.933279] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:05:50.652 [2024-07-23 10:26:38.933354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423062 ]
00:05:50.652 EAL: No free 2048 kB hugepages reported on node 1
00:05:50.652 [2024-07-23 10:26:39.004639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.652 [2024-07-23 10:26:39.048068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
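The compression cases add -l to point accel_perf at an input corpus, here the bib file shipped under test/accel. Run standalone it would look roughly like this (path relative to the spdk checkout; the JSON config fed on fd 62 by the harness is omitted for brevity):

$ ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib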
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:50.652 10:26:39 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:05:52.032 10:26:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:52.032 10:26:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:05:52.032 10:26:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:52.032
00:05:52.032 real 0m1.327s
00:05:52.032 user 0m1.196s
00:05:52.032 sys 0m0.145s
00:05:52.032 10:26:40 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:52.032 10:26:40 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:05:52.032 ************************************
00:05:52.032 END TEST accel_comp
00:05:52.032 ************************************
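accel_decomp repeats the exercise in the decompress direction and adds -y. The runs carrying -y record val=Yes in their traces where the earlier runs recorded val=No, so -y evidently switches on verification of each operation's output; treat that as an inference from the trace rather than a documented flag description:

$ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y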
00:05:52.032 10:26:40 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:05:52.032 10:26:40 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:05:52.032 ************************************
00:05:52.032 START TEST accel_decomp
00:05:52.032 ************************************
00:05:52.032 10:26:40 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:05:52.032 [2024-07-23 10:26:40.341099] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:05:52.032 [2024-07-23 10:26:40.341181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423266 ]
00:05:52.032 EAL: No free 2048 kB hugepages reported on node 1
00:05:52.032 [2024-07-23 10:26:40.411704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:52.032 [2024-07-23 10:26:40.452936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:52.032 10:26:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:05:53.410 10:26:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:53.410 10:26:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:53.410 10:26:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:53.410
00:05:53.410 real 0m1.307s
00:05:53.410 user 0m1.182s
00:05:53.410 sys 0m0.139s
00:05:53.410 10:26:41 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:53.410 10:26:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:05:53.410 ************************************
00:05:53.410 END TEST accel_decomp
00:05:53.410 ************************************
00:05:53.410 10:26:41 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:53.410 10:26:41 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:05:53.410 ************************************
00:05:53.410 START TEST accel_decmop_full
00:05:53.410 ************************************
00:05:53.410 10:26:41 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:53.410 10:26:41 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:53.410 10:26:41 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:53.410 10:26:41 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r .
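accel_decmop_full (the transposed name comes from the test script itself) reruns decompress with -o 0. In the trace below the 4096-byte buffers are replaced by '111250 bytes', which looks like the size of the bib input, suggesting -o sets the transfer size and 0 means "process the whole file as one buffer"; treat both readings as assumptions:

$ stat -c %s test/accel/bib    # expecting 111250 if the buffer size below is the file size
$ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0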
00:05:53.410 [2024-07-23 10:26:41.724314] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:05:53.410 [2024-07-23 10:26:41.724375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423464 ]
00:05:53.410 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.410 [2024-07-23 10:26:41.785797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.410 [2024-07-23 10:26:41.829059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.410 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1
00:05:53.410 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress
00:05:53.410 10:26:41 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds'
00:05:53.411 10:26:41 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes
00:05:54.789 10:26:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:54.789 10:26:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:54.789 10:26:43 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:54.789
00:05:54.789 real 0m1.314s
00:05:54.789 user 0m1.192s
00:05:54.789 sys 0m0.135s
00:05:54.789 10:26:43 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:54.789 10:26:43 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x
00:05:54.789 ************************************
00:05:54.789 END TEST accel_decmop_full
00:05:54.789 ************************************
00:05:54.789 10:26:43 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.789 10:26:43 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:05:54.789 ************************************
00:05:54.789 START TEST accel_decomp_mcore
00:05:54.789 ************************************
00:05:54.789 10:26:43 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.789 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.789 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.789 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:05:54.789 [2024-07-23 10:26:43.130086] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:05:54.789 [2024-07-23 10:26:43.130183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423672 ]
00:05:54.789 EAL: No free 2048 kB hugepages reported on node 1
00:05:54.789 [2024-07-23 10:26:43.201183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:54.789 [2024-07-23 10:26:43.248952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:54.789 [2024-07-23 10:26:43.249038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:54.789 [2024-07-23 10:26:43.249116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:54.789 [2024-07-23 10:26:43.249118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
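accel_decomp_mcore passes -m 0xf, which accel_perf forwards to the EAL as the core mask (-c 0xf in the parameters above): each set bit schedules one reactor, hence "Total cores available: 4" and the four reactor notices. For example:

# 0xf == 0b1111 -> reactors on cores 0-3; 0x3 would use two cores
$ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf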
00:05:55.048 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:55.048 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:55.048 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:55.048 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:55.049 10:26:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:55.987 10:26:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:55.987 10:26:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:55.987 10:26:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:55.987
00:05:55.987 real 0m1.342s
00:05:55.987 user 0m4.580s
00:05:55.987 sys 0m0.153s
00:05:55.987 10:26:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:55.987 10:26:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:55.987 ************************************
00:05:55.987 END TEST accel_decomp_mcore
00:05:55.987 ************************************
00:05:56.247 10:26:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:56.247 10:26:44 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:05:56.247 ************************************
00:05:56.247 START TEST accel_decomp_full_mcore
00:05:56.247 ************************************
00:05:56.247 10:26:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:56.247 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:56.247 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
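The multicore timing above is worth a second look: wall-clock time stays around 1.3 s like the single-core runs, but user time jumps to 0m4.580s because four polling reactors each burn close to a full core for the duration, whereas the single-core cases show user roughly equal to real. A rough sanity check:

$ echo '4 * 1.146' | bc -l    # ~4.584, close to the user 0m4.580s reported above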
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.247 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.247 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:56.247 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:56.247 [2024-07-23 10:26:44.556251] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:56.247 [2024-07-23 10:26:44.556338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423872 ] 00:05:56.247 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.247 [2024-07-23 10:26:44.630595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.247 [2024-07-23 10:26:44.682425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.247 [2024-07-23 10:26:44.682516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.247 [2024-07-23 10:26:44.682599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.247 [2024-07-23 10:26:44.682600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:56.248 10:26:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.507 10:26:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.447 00:05:57.447 real 0m1.362s 00:05:57.447 user 0m4.618s 00:05:57.447 sys 0m0.167s 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.447 10:26:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:57.447 ************************************ 00:05:57.447 END TEST accel_decomp_full_mcore 00:05:57.447 ************************************ 00:05:57.447 10:26:45 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:57.447 10:26:45 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:57.447 10:26:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.447 10:26:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.707 ************************************ 00:05:57.707 START TEST accel_decomp_mthread 00:05:57.707 ************************************ 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:57.707 10:26:45 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:05:57.707 [2024-07-23 10:26:45.991847] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:57.707 [2024-07-23 10:26:45.991909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424076 ] 00:05:57.707 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.707 [2024-07-23 10:26:46.058990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.707 [2024-07-23 10:26:46.102336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.707 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 
10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.708 10:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.708 10:26:46 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:05:58.792 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.792 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.793 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.053 00:05:59.053 real 0m1.318s 00:05:59.053 user 0m1.189s 00:05:59.053 sys 0m0.144s 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.053 10:26:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:59.053 ************************************ 00:05:59.053 END TEST accel_decomp_mthread 00:05:59.053 ************************************ 00:05:59.053 10:26:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:59.053 10:26:47 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:59.053 10:26:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.053 
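The accel_decomp_full_mthread run below adds -o 0 to the mthread invocation; judging by the val lines in this log, that switches the per-operation buffer from the '4096 bytes' of the chunked runs to the full '111250 bytes' bib file. A sketch under that assumption:

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib \
        -y -o 0 -T 2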
10:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.053 ************************************ 00:05:59.053 START TEST accel_decomp_full_mthread 00:05:59.053 ************************************ 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:59.053 [2024-07-23 10:26:47.388956] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:59.053 [2024-07-23 10:26:47.389011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424287 ] 00:05:59.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.053 [2024-07-23 10:26:47.455534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.053 [2024-07-23 10:26:47.495459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.313 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.313 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:05:59.313 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.313 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.313 10:26:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.251 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.252 00:06:00.252 real 0m1.316s 00:06:00.252 user 0m1.197s 00:06:00.252 sys 0m0.132s 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.252 10:26:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:00.252 ************************************ 00:06:00.252 END TEST accel_decomp_full_mthread 00:06:00.252 
************************************ 00:06:00.252 10:26:48 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:00.252 10:26:48 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:00.252 10:26:48 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:00.252 10:26:48 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:00.252 10:26:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.252 10:26:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.252 10:26:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.252 10:26:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.252 10:26:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.252 10:26:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.252 10:26:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.252 10:26:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:00.252 10:26:48 accel -- accel/accel.sh@41 -- # jq -r . 00:06:00.512 ************************************ 00:06:00.512 START TEST accel_dif_functional_tests 00:06:00.512 ************************************ 00:06:00.512 10:26:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:00.512 [2024-07-23 10:26:48.778128] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:00.512 [2024-07-23 10:26:48.778187] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424488 ] 00:06:00.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.512 [2024-07-23 10:26:48.844517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.512 [2024-07-23 10:26:48.889974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.512 [2024-07-23 10:26:48.890061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.512 [2024-07-23 10:26:48.890063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.512 00:06:00.512 00:06:00.512 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.512 http://cunit.sourceforge.net/ 00:06:00.512 00:06:00.512 00:06:00.512 Suite: accel_dif 00:06:00.512 Test: verify: DIF generated, GUARD check ...passed 00:06:00.512 Test: verify: DIF generated, APPTAG check ...passed 00:06:00.512 Test: verify: DIF generated, REFTAG check ...passed 00:06:00.512 Test: verify: DIF not generated, GUARD check ...[2024-07-23 10:26:48.963100] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:00.512 passed 00:06:00.512 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 10:26:48.963159] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:00.512 passed 00:06:00.512 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 10:26:48.963202] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:00.512 passed 00:06:00.512 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:00.512 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 10:26:48.963253] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:00.512 passed 00:06:00.512 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:00.512 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:00.512 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:00.512 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 10:26:48.963356] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:00.512 passed 00:06:00.512 Test: verify copy: DIF generated, GUARD check ...passed 00:06:00.512 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:00.512 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:00.512 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 10:26:48.963475] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:00.512 passed 00:06:00.512 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 10:26:48.963503] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:00.512 passed 00:06:00.512 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 10:26:48.963530] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:00.512 passed 00:06:00.512 Test: generate copy: DIF generated, GUARD check ...passed 00:06:00.512 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:00.512 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:00.512 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:00.512 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:00.512 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:00.512 Test: generate copy: iovecs-len validate ...[2024-07-23 10:26:48.963705] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:00.512 passed 00:06:00.512 Test: generate copy: buffer alignment validate ...passed 00:06:00.512 00:06:00.512 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.512 suites 1 1 n/a 0 0 00:06:00.512 tests 26 26 26 0 0 00:06:00.512 asserts 115 115 115 0 n/a 00:06:00.512 00:06:00.512 Elapsed time = 0.000 seconds 00:06:00.772 00:06:00.772 real 0m0.372s 00:06:00.772 user 0m0.594s 00:06:00.772 sys 0m0.169s 00:06:00.772 10:26:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.772 10:26:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:00.772 ************************************ 00:06:00.772 END TEST accel_dif_functional_tests 00:06:00.772 ************************************ 00:06:00.772 00:06:00.772 real 0m30.380s 00:06:00.772 user 0m33.160s 00:06:00.772 sys 0m5.256s 00:06:00.772 10:26:49 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.772 10:26:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.772 ************************************ 00:06:00.772 END TEST accel 00:06:00.772 ************************************ 00:06:00.772 10:26:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:00.772 10:26:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.772 10:26:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.772 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.772 ************************************ 00:06:00.772 START TEST accel_rpc 00:06:00.772 ************************************ 00:06:00.772 10:26:49 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:01.031 * Looking for test storage... 00:06:01.031 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:01.031 10:26:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.031 10:26:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3424592 00:06:01.031 10:26:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3424592 00:06:01.031 10:26:49 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3424592 ']' 00:06:01.031 10:26:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:01.031 10:26:49 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.031 10:26:49 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.031 10:26:49 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.031 10:26:49 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.032 10:26:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 [2024-07-23 10:26:49.378963] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
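The accel_rpc suite starting here works against a bare spdk_tgt launched with --wait-for-rpc, so opcode assignment can be exercised both before and after framework init. A minimal sketch of the RPC sequence visible in this run, using rpc.py against the default /var/tmp/spdk.sock:

    # pre-init: assignments are accepted and logged by rpc_accel_assign_opc
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    # post-init: the last assignment wins, so copy maps to the software module
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software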
00:06:01.032 [2024-07-23 10:26:49.379041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424592 ] 00:06:01.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.032 [2024-07-23 10:26:49.449631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.032 [2024-07-23 10:26:49.495414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.291 10:26:49 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.291 10:26:49 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:01.291 10:26:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:01.291 10:26:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:01.291 10:26:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:01.291 10:26:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:01.291 10:26:49 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:01.291 10:26:49 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.291 10:26:49 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.291 10:26:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.291 ************************************ 00:06:01.291 START TEST accel_assign_opcode 00:06:01.291 ************************************ 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.291 [2024-07-23 10:26:49.584167] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.291 [2024-07-23 10:26:49.592179] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.291 10:26:49 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:01.291 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.551 software 00:06:01.551 00:06:01.551 real 0m0.243s 00:06:01.551 user 0m0.040s 00:06:01.551 sys 0m0.011s 00:06:01.551 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.551 10:26:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.551 ************************************ 00:06:01.551 END TEST accel_assign_opcode 00:06:01.551 ************************************ 00:06:01.551 10:26:49 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3424592 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3424592 ']' 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3424592 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3424592 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3424592' 00:06:01.551 killing process with pid 3424592 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@965 -- # kill 3424592 00:06:01.551 10:26:49 accel_rpc -- common/autotest_common.sh@970 -- # wait 3424592 00:06:01.811 00:06:01.811 real 0m0.961s 00:06:01.811 user 0m0.837s 00:06:01.811 sys 0m0.468s 00:06:01.811 10:26:50 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.811 10:26:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.811 ************************************ 00:06:01.811 END TEST accel_rpc 00:06:01.811 ************************************ 00:06:01.811 10:26:50 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:01.811 10:26:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.811 10:26:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.811 10:26:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.811 ************************************ 00:06:01.811 START TEST app_cmdline 00:06:01.811 ************************************ 00:06:01.811 10:26:50 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:02.071 * Looking for test storage... 
00:06:02.071 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:02.071 10:26:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:02.071 10:26:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3424805 00:06:02.071 10:26:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3424805 00:06:02.071 10:26:50 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3424805 ']' 00:06:02.071 10:26:50 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.071 10:26:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:02.071 10:26:50 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.071 10:26:50 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.071 10:26:50 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.071 10:26:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.071 [2024-07-23 10:26:50.424585] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:02.071 [2024-07-23 10:26:50.424648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424805 ] 00:06:02.071 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.071 [2024-07-23 10:26:50.493863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.071 [2024-07-23 10:26:50.537490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.330 10:26:50 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.330 10:26:50 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:02.330 10:26:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:02.590 { 00:06:02.590 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:06:02.590 "fields": { 00:06:02.590 "major": 24, 00:06:02.590 "minor": 5, 00:06:02.590 "patch": 1, 00:06:02.590 "suffix": "-pre", 00:06:02.590 "commit": "241d0f3c9" 00:06:02.590 } 00:06:02.590 } 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.590 10:26:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:02.590 10:26:50 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.850 request: 00:06:02.850 { 00:06:02.850 "method": "env_dpdk_get_mem_stats", 00:06:02.850 "req_id": 1 00:06:02.850 } 00:06:02.850 Got JSON-RPC error response 00:06:02.850 response: 00:06:02.850 { 00:06:02.850 "code": -32601, 00:06:02.850 "message": "Method not found" 00:06:02.850 } 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.850 10:26:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3424805 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3424805 ']' 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3424805 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3424805 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3424805' 00:06:02.850 killing process with pid 3424805 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@965 -- # kill 3424805 00:06:02.850 10:26:51 app_cmdline -- common/autotest_common.sh@970 -- # wait 3424805 00:06:03.110 00:06:03.110 real 0m1.183s 00:06:03.110 user 0m1.299s 00:06:03.110 sys 0m0.480s 00:06:03.110 10:26:51 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.110 
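app_cmdline's two RPC probes above pair a positive and a negative check of the --rpcs-allowed list. A sketch of both, assuming the same allow list as the spdk_tgt launch in this run; methods outside the list are rejected at the JSON-RPC layer:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version | jq -r .version   # -> SPDK v24.05.1-pre git sha1 241d0f3c9
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    ./scripts/rpc.py env_dpdk_get_mem_stats              # rejected: code -32601, "Method not found"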
10:26:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.110 ************************************ 00:06:03.110 END TEST app_cmdline 00:06:03.110 ************************************ 00:06:03.110 10:26:51 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:03.110 10:26:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.110 10:26:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.110 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:03.110 ************************************ 00:06:03.110 START TEST version 00:06:03.110 ************************************ 00:06:03.110 10:26:51 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:03.370 * Looking for test storage... 00:06:03.370 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:03.370 10:26:51 version -- app/version.sh@17 -- # get_header_version major 00:06:03.370 10:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.370 10:26:51 version -- app/version.sh@17 -- # major=24 00:06:03.370 10:26:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:03.370 10:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.370 10:26:51 version -- app/version.sh@18 -- # minor=5 00:06:03.370 10:26:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:03.370 10:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.370 10:26:51 version -- app/version.sh@19 -- # patch=1 00:06:03.370 10:26:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:03.370 10:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.370 10:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:03.370 10:26:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:03.370 10:26:51 version -- app/version.sh@22 -- # version=24.5 00:06:03.370 10:26:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:03.370 10:26:51 version -- app/version.sh@25 -- # version=24.5.1 00:06:03.370 10:26:51 version -- app/version.sh@28 -- # version=24.5.1rc0 00:06:03.370 10:26:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:03.370 10:26:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:03.370 10:26:51 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:06:03.370 10:26:51 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:06:03.370 00:06:03.370 real 0m0.182s 00:06:03.370 user 0m0.096s 00:06:03.370 sys 0m0.130s 00:06:03.370 10:26:51 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.370 10:26:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:03.370 ************************************ 00:06:03.370 END TEST version 00:06:03.370 ************************************ 00:06:03.371 10:26:51 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@198 -- # uname -s 00:06:03.371 10:26:51 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:03.371 10:26:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:03.371 10:26:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:03.371 10:26:51 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:03.371 10:26:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.371 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:03.371 10:26:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:03.371 10:26:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:03.371 10:26:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:03.371 10:26:51 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:03.371 10:26:51 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:03.371 10:26:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.371 10:26:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.371 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:03.631 ************************************ 00:06:03.631 START TEST llvm_fuzz 00:06:03.631 ************************************ 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:03.631 * Looking for test storage... 
00:06:03.631 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:03.631 10:26:51 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.631 10:26:51 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:03.631 ************************************ 00:06:03.631 START TEST nvmf_fuzz 00:06:03.631 ************************************ 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:03.631 * Looking for test storage... 
00:06:03.631 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:03.631 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@59 -- # 
CONFIG_IPSEC_MB_DIR= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:03.632 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 
00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:03.894 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:03.894 #define SPDK_CONFIG_H 00:06:03.894 #define SPDK_CONFIG_APPS 1 00:06:03.894 #define SPDK_CONFIG_ARCH native 00:06:03.894 #undef SPDK_CONFIG_ASAN 00:06:03.894 #undef SPDK_CONFIG_AVAHI 00:06:03.894 #undef SPDK_CONFIG_CET 00:06:03.894 #define SPDK_CONFIG_COVERAGE 1 00:06:03.894 #define SPDK_CONFIG_CROSS_PREFIX 00:06:03.894 #undef SPDK_CONFIG_CRYPTO 00:06:03.894 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:03.894 #undef SPDK_CONFIG_CUSTOMOCF 00:06:03.894 #undef SPDK_CONFIG_DAOS 00:06:03.894 #define SPDK_CONFIG_DAOS_DIR 00:06:03.894 #define SPDK_CONFIG_DEBUG 1 00:06:03.894 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:03.894 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:06:03.894 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:06:03.894 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:06:03.894 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:03.894 #undef SPDK_CONFIG_DPDK_UADK 00:06:03.894 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:03.894 #define SPDK_CONFIG_EXAMPLES 1 00:06:03.894 #undef SPDK_CONFIG_FC 00:06:03.894 #define SPDK_CONFIG_FC_PATH 00:06:03.894 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:03.894 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:03.894 #undef SPDK_CONFIG_FUSE 00:06:03.894 #define SPDK_CONFIG_FUZZER 1 00:06:03.894 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:03.894 #undef SPDK_CONFIG_GOLANG 00:06:03.894 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:03.894 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:03.894 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:03.894 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:03.894 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:03.894 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:03.894 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:03.894 #define SPDK_CONFIG_IDXD 1 00:06:03.894 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:03.894 #undef SPDK_CONFIG_IPSEC_MB 00:06:03.894 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:03.894 #define SPDK_CONFIG_ISAL 1 00:06:03.894 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:03.894 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:03.894 #define SPDK_CONFIG_LIBDIR 00:06:03.894 #undef SPDK_CONFIG_LTO 00:06:03.894 #define SPDK_CONFIG_MAX_LCORES 00:06:03.894 #define SPDK_CONFIG_NVME_CUSE 1 00:06:03.894 #undef SPDK_CONFIG_OCF 00:06:03.894 #define SPDK_CONFIG_OCF_PATH 00:06:03.894 #define SPDK_CONFIG_OPENSSL_PATH 00:06:03.894 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:03.894 #define SPDK_CONFIG_PGO_DIR 
00:06:03.894 #undef SPDK_CONFIG_PGO_USE 00:06:03.894 #define SPDK_CONFIG_PREFIX /usr/local 00:06:03.894 #undef SPDK_CONFIG_RAID5F 00:06:03.894 #undef SPDK_CONFIG_RBD 00:06:03.894 #define SPDK_CONFIG_RDMA 1 00:06:03.894 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:03.894 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:03.894 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:03.894 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:03.894 #undef SPDK_CONFIG_SHARED 00:06:03.894 #undef SPDK_CONFIG_SMA 00:06:03.894 #define SPDK_CONFIG_TESTS 1 00:06:03.894 #undef SPDK_CONFIG_TSAN 00:06:03.895 #define SPDK_CONFIG_UBLK 1 00:06:03.895 #define SPDK_CONFIG_UBSAN 1 00:06:03.895 #undef SPDK_CONFIG_UNIT_TESTS 00:06:03.895 #undef SPDK_CONFIG_URING 00:06:03.895 #define SPDK_CONFIG_URING_PATH 00:06:03.895 #undef SPDK_CONFIG_URING_ZNS 00:06:03.895 #undef SPDK_CONFIG_USDT 00:06:03.895 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:03.895 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:03.895 #define SPDK_CONFIG_VFIO_USER 1 00:06:03.895 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:03.895 #define SPDK_CONFIG_VHOST 1 00:06:03.895 #define SPDK_CONFIG_VIRTIO 1 00:06:03.895 #undef SPDK_CONFIG_VTUNE 00:06:03.895 #define SPDK_CONFIG_VTUNE_DIR 00:06:03.895 #define SPDK_CONFIG_WERROR 1 00:06:03.895 #define SPDK_CONFIG_WPDK_DIR 00:06:03.895 #undef SPDK_CONFIG_XNVME 00:06:03.895 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:03.895 10:26:52 
llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@57 -- # : 1 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@61 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # : 1 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # : 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:03.895 10:26:52 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # : 1 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # : 1 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # : 0 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:03.895 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # : 1 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # : 0 00:06:03.896 10:26:52 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # : true 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # : 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@166 -- 
# : 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # : 0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 
00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # cat 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:03.896 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j72 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3425154 ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # kill -0 3425154 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.XkuMrs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.XkuMrs/tests/nvmf /tmp/spdk.XkuMrs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # df -T 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=945618944 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4338810880 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=48919359488 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742542848 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=12823183360 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866558976 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871269376 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342702080 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348510208 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5808128 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870736896 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871273472 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=536576 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174248960 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # sizes["$mount"]=6174253056 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:03.897 * Looking for test storage... 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@373 -- # target_space=48919359488 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # new_size=15037775872 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.897 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # return 0 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # true 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- 
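Aside: the set_test_storage trace above boils down to a simple headroom check. The requested 2 GiB is padded by 64 MiB, compared against the space available on the mount backing the test directory, and the projected usage must stay within 95% of the filesystem. A minimal bash sketch of that arithmetic using the values traced above (variable names are illustrative, not the script's own):

    requested_size=$(( 2147483648 + 67108864 ))   # 2 GiB plus 64 MiB padding = 2214592512
    target_space=48919359488                      # bytes available on the overlay mount at /
    fs_size=61742542848                           # total size of that filesystem
    fs_used=12823183360                           # bytes already in use
    if (( target_space >= requested_size )); then
      new_size=$(( fs_used + requested_size ))    # 15037775872, matching new_size above
      if (( new_size * 100 / fs_size <= 95 )); then
        echo "candidate accepted: enough headroom for test storage"
      fi
    fi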
common/autotest_common.sh@27 -- # exec 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:03.897 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:03.898 10:26:52 
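Aside: one detail worth pulling out of the run.sh trace above is how each fuzzer instance gets its own NVMe/TCP listener: the port is derived from the instance index and the JSON target config is patched to match, which is why instance 0 listens on 4400 and instance 1, later in this log, on 4401. A small sketch of that step, with illustrative file paths:

    fuzzer_type=0                               # instance index passed to start_llvm_fuzz
    port="44$(printf %02d "$fuzzer_type")"      # 4400 for index 0, 4401 for index 1, ...
    # swap the default trsvcid 4420 for this instance's port in the target config
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"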
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:03.898 10:26:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:03.898 [2024-07-23 10:26:52.324876] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:03.898 [2024-07-23 10:26:52.324946] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425318 ] 00:06:03.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.157 [2024-07-23 10:26:52.606761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.157 [2024-07-23 10:26:52.640401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.417 [2024-07-23 10:26:52.693658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.417 [2024-07-23 10:26:52.709994] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:04.417 INFO: Running with entropic power schedule (0xFF, 100). 00:06:04.417 INFO: Seed: 3493375341 00:06:04.417 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:04.417 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:04.417 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:04.417 INFO: A corpus is not provided, starting from an empty corpus 00:06:04.417 #2 INITED exec/s: 0 rss: 64Mb 00:06:04.417 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:04.417 This may also happen if the target rejected all inputs we tried so far 00:06:04.417 [2024-07-23 10:26:52.765236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.417 [2024-07-23 10:26:52.765268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.676 NEW_FUNC[1/692]: 0x4939b0 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:04.676 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:04.676 #41 NEW cov: 11798 ft: 11798 corp: 2/82b lim: 320 exec/s: 0 rss: 71Mb L: 81/81 MS: 4 InsertByte-InsertByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:06:04.676 [2024-07-23 10:26:53.086062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.676 [2024-07-23 10:26:53.086112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.676 #42 NEW cov: 11928 ft: 12390 corp: 3/199b lim: 320 exec/s: 0 rss: 71Mb L: 117/117 MS: 1 InsertRepeatedBytes- 00:06:04.676 [2024-07-23 10:26:53.136033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.676 [2024-07-23 10:26:53.136065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.676 #47 NEW cov: 11936 ft: 13034 corp: 4/286b lim: 320 exec/s: 0 rss: 71Mb L: 87/117 MS: 5 CopyPart-ChangeByte-InsertByte-InsertByte-InsertRepeatedBytes- 00:06:04.676 [2024-07-23 10:26:53.176235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:29000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.676 [2024-07-23 10:26:53.176261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.936 #48 NEW cov: 12021 ft: 13268 corp: 5/404b lim: 320 exec/s: 0 rss: 71Mb L: 118/118 MS: 1 InsertByte- 00:06:04.936 [2024-07-23 10:26:53.226279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.936 [2024-07-23 10:26:53.226305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.936 #49 NEW cov: 12021 ft: 13338 corp: 6/492b lim: 320 exec/s: 0 rss: 72Mb L: 88/118 MS: 1 InsertByte- 00:06:04.936 [2024-07-23 10:26:53.276429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.936 [2024-07-23 10:26:53.276454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.936 #50 NEW cov: 12021 ft: 13462 corp: 7/579b lim: 320 exec/s: 0 rss: 72Mb L: 87/118 MS: 1 ShuffleBytes- 00:06:04.936 [2024-07-23 10:26:53.316580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.936 
[2024-07-23 10:26:53.316607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.936 #51 NEW cov: 12021 ft: 13506 corp: 8/698b lim: 320 exec/s: 0 rss: 72Mb L: 119/119 MS: 1 InsertByte- 00:06:04.936 [2024-07-23 10:26:53.367011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.937 [2024-07-23 10:26:53.367036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.937 [2024-07-23 10:26:53.367094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:5 nsid:0 cdw10:4d4d4d00 cdw11:4d4d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.937 [2024-07-23 10:26:53.367108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.937 [2024-07-23 10:26:53.367179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:6 nsid:e2e2e2e2 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xe2e2e2e2e2e2e2e2 00:06:04.937 [2024-07-23 10:26:53.367193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.937 #52 NEW cov: 12022 ft: 14229 corp: 9/894b lim: 320 exec/s: 0 rss: 72Mb L: 196/196 MS: 1 CopyPart- 00:06:04.937 [2024-07-23 10:26:53.406856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:04.937 [2024-07-23 10:26:53.406882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.197 #53 NEW cov: 12022 ft: 14266 corp: 10/982b lim: 320 exec/s: 0 rss: 72Mb L: 88/196 MS: 1 ShuffleBytes- 00:06:05.197 [2024-07-23 10:26:53.457068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:27272727 cdw10:27272727 cdw11:27272727 00:06:05.197 [2024-07-23 10:26:53.457095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.197 [2024-07-23 10:26:53.457162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.197 [2024-07-23 10:26:53.457176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.197 #54 NEW cov: 12022 ft: 14472 corp: 11/1128b lim: 320 exec/s: 0 rss: 72Mb L: 146/196 MS: 1 InsertRepeatedBytes- 00:06:05.197 [2024-07-23 10:26:53.507190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.197 [2024-07-23 10:26:53.507217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.197 [2024-07-23 10:26:53.507269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.197 [2024-07-23 10:26:53.507283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.197 #55 NEW cov: 12022 ft: 14492 corp: 12/1297b lim: 320 exec/s: 0 rss: 72Mb L: 169/196 MS: 1 CrossOver- 
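Aside: for readability, here is the llvm_nvme_fuzz invocation traced at run.sh@45 above, re-wrapped and annotated with the flag meanings implied by the local variables set earlier in run.sh. The -P mapping is an assumption on my part; the rest follow from core=0x1, mem_size=512, timen=1, nvmf_cfg, corpus_dir, and fuzzer_type=0. The long workspace paths are shortened to placeholders:

    # -m 0x1: core mask, pin the reactor to core 0        (core=0x1)
    # -s 512: hugepage memory size in MB                   (mem_size=512)
    # -c ...: per-instance target config from the sed step (nvmf_cfg)
    # -t 1:   seconds to run this fuzzer                   (timen=1)
    # -D ...: persistent corpus directory                  (corpus_dir)
    # -Z 0:   fuzzer index selecting the command set       (fuzzer_type=0)
    # -P ...: output directory for artifacts               (assumed from the path)
    ./llvm_nvme_fuzz -m 0x1 -s 512 \
      -P /path/to/output/llvm/ \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' \
      -c /tmp/fuzz_json_0.conf -t 1 \
      -D /path/to/corpus/llvm_nvmf_0 -Z 0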
00:06:05.197 [2024-07-23 10:26:53.547293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.197 [2024-07-23 10:26:53.547322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.197 [2024-07-23 10:26:53.547373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.197 [2024-07-23 10:26:53.547387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.197 #56 NEW cov: 12022 ft: 14513 corp: 13/1466b lim: 320 exec/s: 0 rss: 72Mb L: 169/196 MS: 1 ShuffleBytes- 00:06:05.197 [2024-07-23 10:26:53.597425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:27272727 cdw10:27272727 cdw11:27272727 00:06:05.197 [2024-07-23 10:26:53.597451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.197 [2024-07-23 10:26:53.597504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.197 [2024-07-23 10:26:53.597517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.197 #57 NEW cov: 12022 ft: 14529 corp: 14/1612b lim: 320 exec/s: 0 rss: 72Mb L: 146/196 MS: 1 CMP- DE: "\004\000"- 00:06:05.197 [2024-07-23 10:26:53.647824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.197 [2024-07-23 10:26:53.647851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.197 [2024-07-23 10:26:53.647911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:5 nsid:0 cdw10:4d4d4d00 cdw11:4d4d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.197 [2024-07-23 10:26:53.647926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.197 [2024-07-23 10:26:53.647985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: OCSSD / GEOMETRY (e2) qid:0 cid:6 nsid:e2e2e2e2 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xe2e2e2e2e2e2e2e2 00:06:05.197 [2024-07-23 10:26:53.647999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.197 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:05.197 #58 NEW cov: 12045 ft: 14598 corp: 15/1808b lim: 320 exec/s: 0 rss: 72Mb L: 196/196 MS: 1 ChangeBit- 00:06:05.197 [2024-07-23 10:26:53.697812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:4d4d4d4d cdw11:4d4d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.197 [2024-07-23 10:26:53.697840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.457 [2024-07-23 10:26:53.697893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:4d4d4d4d cdw11:4d4d4d4d 00:06:05.457 [2024-07-23 
10:26:53.697908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.457 #59 NEW cov: 12045 ft: 14641 corp: 16/1970b lim: 320 exec/s: 0 rss: 72Mb L: 162/196 MS: 1 CopyPart- 00:06:05.457 [2024-07-23 10:26:53.747793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.457 [2024-07-23 10:26:53.747821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.457 #60 NEW cov: 12045 ft: 14642 corp: 17/2087b lim: 320 exec/s: 60 rss: 72Mb L: 117/196 MS: 1 ChangeBit- 00:06:05.457 [2024-07-23 10:26:53.787972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.457 [2024-07-23 10:26:53.787997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.457 [2024-07-23 10:26:53.788050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.457 [2024-07-23 10:26:53.788064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.457 #61 NEW cov: 12045 ft: 14661 corp: 18/2257b lim: 320 exec/s: 61 rss: 72Mb L: 170/196 MS: 1 InsertByte- 00:06:05.457 [2024-07-23 10:26:53.837988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:2a000000 cdw10:00000000 cdw11:00000000 00:06:05.457 [2024-07-23 10:26:53.838013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.457 #62 NEW cov: 12045 ft: 14689 corp: 19/2345b lim: 320 exec/s: 62 rss: 72Mb L: 88/196 MS: 1 InsertByte- 00:06:05.457 [2024-07-23 10:26:53.878196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.457 [2024-07-23 10:26:53.878222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.457 [2024-07-23 10:26:53.878273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:4d2e2900 cdw11:4d4d4d4d 00:06:05.457 [2024-07-23 10:26:53.878287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.457 #68 NEW cov: 12045 ft: 14732 corp: 20/2528b lim: 320 exec/s: 68 rss: 73Mb L: 183/196 MS: 1 InsertRepeatedBytes- 00:06:05.457 [2024-07-23 10:26:53.928271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.457 [2024-07-23 10:26:53.928296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.457 #69 NEW cov: 12045 ft: 14762 corp: 21/2618b lim: 320 exec/s: 69 rss: 73Mb L: 90/196 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:05.716 [2024-07-23 10:26:53.968422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:29000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.716 [2024-07-23 10:26:53.968446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.717 #70 NEW cov: 12045 ft: 14836 corp: 22/2738b lim: 320 exec/s: 70 rss: 73Mb L: 120/196 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:05.717 [2024-07-23 10:26:54.008550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.717 [2024-07-23 10:26:54.008575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.717 #71 NEW cov: 12045 ft: 14849 corp: 23/2857b lim: 320 exec/s: 71 rss: 73Mb L: 119/196 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:05.717 [2024-07-23 10:26:54.048624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:29000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.717 [2024-07-23 10:26:54.048649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.717 #72 NEW cov: 12045 ft: 14886 corp: 24/2977b lim: 320 exec/s: 72 rss: 73Mb L: 120/196 MS: 1 CopyPart- 00:06:05.717 [2024-07-23 10:26:54.098824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.717 [2024-07-23 10:26:54.098849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.717 #73 NEW cov: 12045 ft: 14898 corp: 25/3095b lim: 320 exec/s: 73 rss: 73Mb L: 118/196 MS: 1 EraseBytes- 00:06:05.717 [2024-07-23 10:26:54.138996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.717 [2024-07-23 10:26:54.139026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.717 [2024-07-23 10:26:54.139084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4d4dffffffffffff 00:06:05.717 [2024-07-23 10:26:54.139098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.717 NEW_FUNC[1/1]: 0x139a840 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2038 00:06:05.717 #74 NEW cov: 12076 ft: 15004 corp: 26/3260b lim: 320 exec/s: 74 rss: 73Mb L: 165/196 MS: 1 InsertRepeatedBytes- 00:06:05.717 [2024-07-23 10:26:54.178967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.717 [2024-07-23 10:26:54.178993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.717 #75 NEW cov: 12076 ft: 15020 corp: 27/3380b lim: 320 exec/s: 75 rss: 73Mb L: 120/196 MS: 1 EraseBytes- 00:06:05.976 [2024-07-23 10:26:54.219094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00040000 cdw11:00000000 00:06:05.976 [2024-07-23 10:26:54.219120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.976 #76 NEW cov: 12076 ft: 15044 
corp: 28/3469b lim: 320 exec/s: 76 rss: 73Mb L: 89/196 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:05.976 [2024-07-23 10:26:54.259567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.977 [2024-07-23 10:26:54.259592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.259651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:4d4d4d4d cdw11:4d4d4d4d SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:05.977 [2024-07-23 10:26:54.259665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.259714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.977 [2024-07-23 10:26:54.259727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.259781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:4de2e2e2 cdw11:4d4d4d4d 00:06:05.977 [2024-07-23 10:26:54.259795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.977 #77 NEW cov: 12076 ft: 15217 corp: 29/3760b lim: 320 exec/s: 77 rss: 73Mb L: 291/291 MS: 1 InsertRepeatedBytes- 00:06:05.977 [2024-07-23 10:26:54.309734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.977 [2024-07-23 10:26:54.309759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.309824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:4d4d4d4d cdw11:4d4d4d4d SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:05.977 [2024-07-23 10:26:54.309839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.309891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:4 cdw10:00000000 cdw11:00000000 00:06:05.977 [2024-07-23 10:26:54.309904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.309957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:4d4d4d4d 00:06:05.977 [2024-07-23 10:26:54.309970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.977 #78 NEW cov: 12076 ft: 15256 corp: 30/4052b lim: 320 exec/s: 78 rss: 73Mb L: 292/292 MS: 1 InsertByte- 00:06:05.977 [2024-07-23 10:26:54.359497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:2a000000 cdw10:00000000 cdw11:00000000 00:06:05.977 [2024-07-23 10:26:54.359522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
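Aside: a key for the libFuzzer status lines that dominate the rest of this log: cov is the number of coverage points (edges) seen so far, ft counts the finer-grained "features", corp: N/Mb gives the corpus entries and their total size, lim is the current input-length limit, exec/s the throughput, rss the memory footprint, L: a/b the new input's length versus the largest unit in the corpus, and MS the mutation sequence that produced it. With the log saved to a file (build.log is a hypothetical name), coverage growth can be roughly charted with:

    # print a running index plus the cov value for each NEW-coverage event
    awk '/NEW cov:/ { for (i = 1; i <= NF; i++) if ($i == "cov:") print ++n, $(i+1) }' build.log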
00:06:05.977 #79 NEW cov: 12076 ft: 15269 corp: 31/4141b lim: 320 exec/s: 79 rss: 73Mb L: 89/292 MS: 1 InsertByte- 00:06:05.977 [2024-07-23 10:26:54.410011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.977 [2024-07-23 10:26:54.410037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.410098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffff0000ffff 00:06:05.977 [2024-07-23 10:26:54.410111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.410162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:4d4d4d4d cdw11:0000004d 00:06:05.977 [2024-07-23 10:26:54.410175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.410226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 00:06:05.977 [2024-07-23 10:26:54.410239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.977 #80 NEW cov: 12076 ft: 15274 corp: 32/4445b lim: 320 exec/s: 80 rss: 73Mb L: 304/304 MS: 1 CrossOver- 00:06:05.977 [2024-07-23 10:26:54.459931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.977 [2024-07-23 10:26:54.459956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.977 [2024-07-23 10:26:54.460006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 00:06:05.977 [2024-07-23 10:26:54.460020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.236 #81 NEW cov: 12076 ft: 15282 corp: 33/4585b lim: 320 exec/s: 81 rss: 73Mb L: 140/304 MS: 1 InsertRepeatedBytes- 00:06:06.236 [2024-07-23 10:26:54.500002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d00 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.236 [2024-07-23 10:26:54.500030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.236 [2024-07-23 10:26:54.500107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4d4dffffffffffff 00:06:06.236 [2024-07-23 10:26:54.500123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.236 #82 NEW cov: 12076 ft: 15289 corp: 34/4750b lim: 320 exec/s: 82 rss: 73Mb L: 165/304 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:06.236 [2024-07-23 10:26:54.550127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 00:06:06.236 [2024-07-23 10:26:54.550158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.236 [2024-07-23 10:26:54.550209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:4d2e2900 cdw11:4d4d4d4d 00:06:06.236 [2024-07-23 10:26:54.550223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.236 #83 NEW cov: 12076 ft: 15297 corp: 35/4935b lim: 320 exec/s: 83 rss: 73Mb L: 185/304 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:06.236 [2024-07-23 10:26:54.600212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000004 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.236 [2024-07-23 10:26:54.600237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.236 #84 NEW cov: 12076 ft: 15301 corp: 36/5054b lim: 320 exec/s: 84 rss: 73Mb L: 119/304 MS: 1 ChangeByte- 00:06:06.236 [2024-07-23 10:26:54.650311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:06.236 [2024-07-23 10:26:54.650337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.236 #85 NEW cov: 12076 ft: 15306 corp: 37/5143b lim: 320 exec/s: 85 rss: 73Mb L: 89/304 MS: 1 PersAutoDict- DE: "\004\000"- 00:06:06.236 [2024-07-23 10:26:54.690457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4d) qid:0 cid:4 nsid:4d4d4d4d cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.236 [2024-07-23 10:26:54.690483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.236 #86 NEW cov: 12076 ft: 15351 corp: 38/5260b lim: 320 exec/s: 86 rss: 74Mb L: 117/304 MS: 1 CopyPart- 00:06:06.496 [2024-07-23 10:26:54.740699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:06.496 [2024-07-23 10:26:54.740725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.496 [2024-07-23 10:26:54.740785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:06.496 [2024-07-23 10:26:54.740799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.496 #87 NEW cov: 12076 ft: 15378 corp: 39/5422b lim: 320 exec/s: 43 rss: 74Mb L: 162/304 MS: 1 EraseBytes- 00:06:06.496 #87 DONE cov: 12076 ft: 15378 corp: 39/5422b lim: 320 exec/s: 43 rss: 74Mb 00:06:06.496 ###### Recommended dictionary. ###### 00:06:06.496 "\004\000" # Uses: 8 00:06:06.496 ###### End of recommended dictionary. 
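Aside: the "Recommended dictionary" block just below is libFuzzer reporting tokens it learned from comparison tracing during the run (hence the CMP and PersAutoDict mutations in the entries above); the two-byte token "\004\000" paid off 8 times here. Nothing in this log feeds it back in, but libFuzzer does accept an AFL-style dictionary file via its -dict= flag, so a hypothetical way to carry the token into later runs would be:

    # nvmf.dict, a hypothetical dictionary file (AFL/libFuzzer syntax: name="value", \xNN escapes)
    cat > nvmf.dict <<'EOF'
    kw1="\x04\x00"
    EOF
    # ./llvm_nvme_fuzz ... -dict=nvmf.dict ...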
###### 00:06:06.496 Done 87 runs in 2 second(s) 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:06.496 10:26:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:06.496 [2024-07-23 10:26:54.939153] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:06.496 [2024-07-23 10:26:54.939236] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425630 ] 00:06:06.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.756 [2024-07-23 10:26:55.215182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.756 [2024-07-23 10:26:55.247090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.015 [2024-07-23 10:26:55.299801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.015 [2024-07-23 10:26:55.316120] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:07.015 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:07.015 INFO: Seed: 1805424773 00:06:07.015 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:07.015 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:07.015 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:07.015 INFO: A corpus is not provided, starting from an empty corpus 00:06:07.015 #2 INITED exec/s: 0 rss: 64Mb 00:06:07.015 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:07.015 This may also happen if the target rejected all inputs we tried so far 00:06:07.015 [2024-07-23 10:26:55.371236] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.015 [2024-07-23 10:26:55.371365] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:07.015 [2024-07-23 10:26:55.371585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.015 [2024-07-23 10:26:55.371616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.015 [2024-07-23 10:26:55.371677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.015 [2024-07-23 10:26:55.371692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.275 NEW_FUNC[1/692]: 0x4942b0 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:07.275 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:07.275 #15 NEW cov: 11885 ft: 11858 corp: 2/13b lim: 30 exec/s: 0 rss: 70Mb L: 12/12 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:06:07.275 [2024-07-23 10:26:55.701985] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.275 [2024-07-23 10:26:55.702137] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:07.275 [2024-07-23 10:26:55.702350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.275 [2024-07-23 10:26:55.702391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.275 [2024-07-23 10:26:55.702449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.275 [2024-07-23 10:26:55.702463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.275 #16 NEW cov: 12017 ft: 12335 corp: 3/26b lim: 30 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 InsertByte- 00:06:07.275 [2024-07-23 10:26:55.752038] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.275 [2024-07-23 10:26:55.752155] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:07.275 [2024-07-23 10:26:55.752361] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.275 [2024-07-23 10:26:55.752387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.275 [2024-07-23 10:26:55.752440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.275 [2024-07-23 10:26:55.752454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.275 #17 NEW cov: 12023 ft: 12660 corp: 4/38b lim: 30 exec/s: 0 rss: 70Mb L: 12/13 MS: 1 ChangeByte- 00:06:07.535 [2024-07-23 10:26:55.792135] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.535 [2024-07-23 10:26:55.792252] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:07.535 [2024-07-23 10:26:55.792452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.792483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.535 [2024-07-23 10:26:55.792536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.792550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.535 #18 NEW cov: 12108 ft: 13043 corp: 5/50b lim: 30 exec/s: 0 rss: 70Mb L: 12/13 MS: 1 ShuffleBytes- 00:06:07.535 [2024-07-23 10:26:55.842289] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.535 [2024-07-23 10:26:55.842404] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047748) > buf size (4096) 00:06:07.535 [2024-07-23 10:26:55.842619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.842644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.535 [2024-07-23 10:26:55.842700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff3083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.842715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.535 #19 NEW cov: 12108 ft: 13127 corp: 6/62b lim: 30 exec/s: 0 rss: 70Mb L: 12/13 MS: 1 ChangeByte- 00:06:07.535 [2024-07-23 10:26:55.882412] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.535 [2024-07-23 10:26:55.882529] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:07.535 [2024-07-23 10:26:55.882724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.882754] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.535 [2024-07-23 10:26:55.882811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.882825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.535 #20 NEW cov: 12108 ft: 13186 corp: 7/74b lim: 30 exec/s: 0 rss: 70Mb L: 12/13 MS: 1 CopyPart- 00:06:07.535 [2024-07-23 10:26:55.922519] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff1e 00:06:07.535 [2024-07-23 10:26:55.922632] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:07.535 [2024-07-23 10:26:55.922844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.922870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.535 [2024-07-23 10:26:55.922925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.922939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.535 #21 NEW cov: 12108 ft: 13229 corp: 8/86b lim: 30 exec/s: 0 rss: 71Mb L: 12/13 MS: 1 CMP- DE: "\377\377\377\036"- 00:06:07.535 [2024-07-23 10:26:55.972635] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.535 [2024-07-23 10:26:55.972749] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:07.535 [2024-07-23 10:26:55.972977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ef cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.973003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.535 [2024-07-23 10:26:55.973056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:55.973070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.535 #27 NEW cov: 12108 ft: 13246 corp: 9/98b lim: 30 exec/s: 0 rss: 71Mb L: 12/13 MS: 1 ChangeBit- 00:06:07.535 [2024-07-23 10:26:56.012772] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.535 [2024-07-23 10:26:56.012893] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a7a 00:06:07.535 [2024-07-23 10:26:56.013109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:56.013134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.535 [2024-07-23 10:26:56.013187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.535 [2024-07-23 10:26:56.013202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.535 #28 NEW cov: 12108 ft: 13285 corp: 10/111b lim: 30 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 CopyPart- 00:06:07.794 [2024-07-23 10:26:56.052883] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.052997] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:07.795 [2024-07-23 10:26:56.053214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.053243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.053297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.053311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.795 #29 NEW cov: 12108 ft: 13407 corp: 11/123b lim: 30 exec/s: 0 rss: 71Mb L: 12/13 MS: 1 ChangeByte- 00:06:07.795 [2024-07-23 10:26:56.093032] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.093148] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.093256] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.093362] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047748) > buf size (4096) 00:06:07.795 [2024-07-23 10:26:56.093574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.093600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.093654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.093668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.093723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.093737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.093794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff3083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.093807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.795 #30 NEW cov: 12108 ft: 14046 corp: 12/147b lim: 30 exec/s: 0 rss: 
71Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:06:07.795 [2024-07-23 10:26:56.143150] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001e7a 00:06:07.795 [2024-07-23 10:26:56.143284] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.143493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.143519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.143572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.143586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.795 #31 NEW cov: 12108 ft: 14075 corp: 13/163b lim: 30 exec/s: 0 rss: 71Mb L: 16/24 MS: 1 PersAutoDict- DE: "\377\377\377\036"- 00:06:07.795 [2024-07-23 10:26:56.183228] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (798720) > buf size (4096) 00:06:07.795 [2024-07-23 10:26:56.183445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.183472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.795 #34 NEW cov: 12108 ft: 14505 corp: 14/169b lim: 30 exec/s: 0 rss: 71Mb L: 6/24 MS: 3 ChangeBinInt-PersAutoDict-InsertByte- DE: "\377\377\377\036"- 00:06:07.795 [2024-07-23 10:26:56.223395] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.223515] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:07.795 [2024-07-23 10:26:56.223730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ef cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.223757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.223817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.223832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.795 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:07.795 #35 NEW cov: 12131 ft: 14553 corp: 15/185b lim: 30 exec/s: 0 rss: 72Mb L: 16/24 MS: 1 InsertRepeatedBytes- 00:06:07.795 [2024-07-23 10:26:56.273598] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.273714] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.273828] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:07.795 [2024-07-23 10:26:56.273931] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get 
log page: len (1047748) > buf size (4096) 00:06:07.795 [2024-07-23 10:26:56.274139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.274165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.274219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff4c83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.274232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.274284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.274297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.795 [2024-07-23 10:26:56.274348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff3083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.795 [2024-07-23 10:26:56.274362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.055 #36 NEW cov: 12131 ft: 14618 corp: 16/209b lim: 30 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeByte- 00:06:08.055 [2024-07-23 10:26:56.323656] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ff1e 00:06:08.055 [2024-07-23 10:26:56.323772] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:08.055 [2024-07-23 10:26:56.323988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.324014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.055 [2024-07-23 10:26:56.324070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.324087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.055 #37 NEW cov: 12131 ft: 14677 corp: 17/221b lim: 30 exec/s: 37 rss: 72Mb L: 12/24 MS: 1 ShuffleBytes- 00:06:08.055 [2024-07-23 10:26:56.373787] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.055 [2024-07-23 10:26:56.373903] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:08.055 [2024-07-23 10:26:56.374099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.374125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.055 [2024-07-23 10:26:56.374180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.374195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.055 #38 NEW cov: 12131 ft: 14690 corp: 18/234b lim: 30 exec/s: 38 rss: 72Mb L: 13/24 MS: 1 ChangeBit- 00:06:08.055 [2024-07-23 10:26:56.423929] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (42040) > buf size (4096) 00:06:08.055 [2024-07-23 10:26:56.424041] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:08.055 [2024-07-23 10:26:56.424247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:290d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.424273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.055 [2024-07-23 10:26:56.424330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.424344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.055 #39 NEW cov: 12131 ft: 14706 corp: 19/247b lim: 30 exec/s: 39 rss: 72Mb L: 13/24 MS: 1 ChangeBinInt- 00:06:08.055 [2024-07-23 10:26:56.474087] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.055 [2024-07-23 10:26:56.474203] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.055 [2024-07-23 10:26:56.474427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.474453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.055 [2024-07-23 10:26:56.474510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff831e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.474524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.055 #40 NEW cov: 12131 ft: 14715 corp: 20/263b lim: 30 exec/s: 40 rss: 72Mb L: 16/24 MS: 1 PersAutoDict- DE: "\377\377\377\036"- 00:06:08.055 [2024-07-23 10:26:56.524188] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (42040) > buf size (4096) 00:06:08.055 [2024-07-23 10:26:56.524306] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:06:08.055 [2024-07-23 10:26:56.524508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:290d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.524534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.055 [2024-07-23 10:26:56.524591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.055 [2024-07-23 10:26:56.524609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:08.315 #41 NEW cov: 12131 ft: 14751 corp: 21/276b lim: 30 exec/s: 41 rss: 72Mb L: 13/24 MS: 1 CMP- DE: "\015\000\000\000"- 00:06:08.315 [2024-07-23 10:26:56.574350] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.315 [2024-07-23 10:26:56.574465] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.315 [2024-07-23 10:26:56.574684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:29ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.574710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.315 [2024-07-23 10:26:56.574767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.574786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.315 #42 NEW cov: 12131 ft: 14766 corp: 22/289b lim: 30 exec/s: 42 rss: 72Mb L: 13/24 MS: 1 CopyPart- 00:06:08.315 [2024-07-23 10:26:56.614533] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.315 [2024-07-23 10:26:56.614645] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fcff 00:06:08.315 [2024-07-23 10:26:56.614749] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.315 [2024-07-23 10:26:56.614861] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:08.315 [2024-07-23 10:26:56.615073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.615099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.315 [2024-07-23 10:26:56.615154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff4c83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.615168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.315 [2024-07-23 10:26:56.615224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.615237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.315 [2024-07-23 10:26:56.615292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff8330 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.615305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.315 #43 NEW cov: 12131 ft: 14786 corp: 23/314b lim: 30 exec/s: 43 rss: 72Mb L: 25/25 MS: 1 InsertByte- 00:06:08.315 [2024-07-23 10:26:56.664603] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff51 00:06:08.315 [2024-07-23 10:26:56.664716] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x30000ffff 00:06:08.315 [2024-07-23 10:26:56.664924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.664950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.315 [2024-07-23 10:26:56.665006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.665020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.315 #44 NEW cov: 12131 ft: 14809 corp: 24/327b lim: 30 exec/s: 44 rss: 72Mb L: 13/25 MS: 1 InsertByte- 00:06:08.315 [2024-07-23 10:26:56.714709] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:08.315 [2024-07-23 10:26:56.714926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff830b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.714952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.315 #45 NEW cov: 12131 ft: 14851 corp: 25/333b lim: 30 exec/s: 45 rss: 72Mb L: 6/25 MS: 1 ShuffleBytes- 00:06:08.315 [2024-07-23 10:26:56.764862] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (798720) > buf size (4096) 00:06:08.315 [2024-07-23 10:26:56.765083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0bff831e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.765107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.315 #46 NEW cov: 12131 ft: 14906 corp: 26/339b lim: 30 exec/s: 46 rss: 73Mb L: 6/25 MS: 1 ShuffleBytes- 00:06:08.315 [2024-07-23 10:26:56.804979] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001e7a 00:06:08.315 [2024-07-23 10:26:56.805093] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0d 00:06:08.315 [2024-07-23 10:26:56.805199] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aff 00:06:08.315 [2024-07-23 10:26:56.805407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.805433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.315 [2024-07-23 10:26:56.805489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.805503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.315 [2024-07-23 10:26:56.805558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.315 [2024-07-23 10:26:56.805572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.575 #47 NEW cov: 12131 ft: 15162 corp: 27/359b lim: 30 exec/s: 47 rss: 73Mb L: 20/25 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:06:08.575 [2024-07-23 10:26:56.855189] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100008181 00:06:08.575 [2024-07-23 10:26:56.855305] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100008181 00:06:08.575 [2024-07-23 10:26:56.855410] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100008181 00:06:08.575 [2024-07-23 10:26:56.855514] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100008181 00:06:08.575 [2024-07-23 10:26:56.855727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:81818181 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.855754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.855808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:81818181 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.855824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.855879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:81818181 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.855896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.855951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:81818181 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.855966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.575 #48 NEW cov: 12131 ft: 15170 corp: 28/386b lim: 30 exec/s: 48 rss: 73Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:08.575 [2024-07-23 10:26:56.895230] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.575 [2024-07-23 10:26:56.895350] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047676) > buf size (4096) 00:06:08.575 [2024-07-23 10:26:56.895561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.895588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.895642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff1e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.895657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.575 #49 NEW cov: 12131 ft: 15177 corp: 29/398b lim: 30 exec/s: 49 rss: 73Mb L: 12/27 MS: 1 PersAutoDict- DE: "\377\377\377\036"- 00:06:08.575 [2024-07-23 10:26:56.935367] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x30000ffff 00:06:08.575 [2024-07-23 10:26:56.935487] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047676) > buf size (4096) 00:06:08.575 [2024-07-23 10:26:56.935689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.935715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.935770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff1e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.935790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.575 #50 NEW cov: 12131 ft: 15198 corp: 30/410b lim: 30 exec/s: 50 rss: 73Mb L: 12/27 MS: 1 ShuffleBytes- 00:06:08.575 [2024-07-23 10:26:56.985590] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.575 [2024-07-23 10:26:56.985705] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.575 [2024-07-23 10:26:56.985817] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.575 [2024-07-23 10:26:56.985924] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.575 [2024-07-23 10:26:56.986133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0ab383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.986158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.986210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b3b383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.986223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.986274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b3b383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.986288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:56.986342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b3b383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:56.986355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.575 #52 NEW cov: 12131 ft: 15234 corp: 31/438b lim: 30 exec/s: 52 rss: 73Mb L: 28/28 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\377\377\377\036"- 00:06:08.575 [2024-07-23 10:26:57.025618] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008181 00:06:08.575 [2024-07-23 10:26:57.025732] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100008181 00:06:08.575 [2024-07-23 10:26:57.025867] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100008181 00:06:08.575 [2024-07-23 
10:26:57.025979] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100008181 00:06:08.575 [2024-07-23 10:26:57.026200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:81810281 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:57.026225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.575 [2024-07-23 10:26:57.026279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:81818181 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.575 [2024-07-23 10:26:57.026294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.576 [2024-07-23 10:26:57.026347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:81818181 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.576 [2024-07-23 10:26:57.026362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.576 [2024-07-23 10:26:57.026414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:81818181 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.576 [2024-07-23 10:26:57.026428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.576 #53 NEW cov: 12131 ft: 15238 corp: 32/466b lim: 30 exec/s: 53 rss: 73Mb L: 28/28 MS: 1 InsertByte- 00:06:08.835 [2024-07-23 10:26:57.075741] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.835 [2024-07-23 10:26:57.075957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.075982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.835 #54 NEW cov: 12131 ft: 15263 corp: 33/477b lim: 30 exec/s: 54 rss: 73Mb L: 11/28 MS: 1 EraseBytes- 00:06:08.835 [2024-07-23 10:26:57.115804] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:06:08.835 [2024-07-23 10:26:57.116012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.116037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.835 #55 NEW cov: 12131 ft: 15266 corp: 34/484b lim: 30 exec/s: 55 rss: 73Mb L: 7/28 MS: 1 EraseBytes- 00:06:08.835 [2024-07-23 10:26:57.166051] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.835 [2024-07-23 10:26:57.166165] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.835 [2024-07-23 10:26:57.166276] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.835 [2024-07-23 10:26:57.166384] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b3b3 00:06:08.835 [2024-07-23 10:26:57.166608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:0 cdw10:0ab383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.166634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.835 [2024-07-23 10:26:57.166686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b3b383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.166700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.835 [2024-07-23 10:26:57.166752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b3b383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.166765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.835 [2024-07-23 10:26:57.166821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b3b383b3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.166835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.835 #56 NEW cov: 12131 ft: 15282 corp: 35/512b lim: 30 exec/s: 56 rss: 73Mb L: 28/28 MS: 1 ChangeBit- 00:06:08.835 [2024-07-23 10:26:57.216129] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff03 00:06:08.835 [2024-07-23 10:26:57.216249] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786556) > buf size (4096) 00:06:08.835 [2024-07-23 10:26:57.216457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.216483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.835 [2024-07-23 10:26:57.216537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:001e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.216551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.835 #57 NEW cov: 12131 ft: 15288 corp: 36/524b lim: 30 exec/s: 57 rss: 73Mb L: 12/28 MS: 1 ChangeBinInt- 00:06:08.835 [2024-07-23 10:26:57.266276] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.835 [2024-07-23 10:26:57.266392] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:08.835 [2024-07-23 10:26:57.266597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff7a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.266623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.835 [2024-07-23 10:26:57.266675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.266689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.835 #58 NEW cov: 12131 ft: 15322 corp: 37/536b lim: 30 exec/s: 58 rss: 73Mb L: 12/28 MS: 1 ChangeByte- 00:06:08.835 [2024-07-23 10:26:57.306385] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:08.835 [2024-07-23 10:26:57.306500] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:08.835 [2024-07-23 10:26:57.306721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.835 [2024-07-23 10:26:57.306746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.836 [2024-07-23 10:26:57.306804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.836 [2024-07-23 10:26:57.306819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.836 #59 NEW cov: 12131 ft: 15343 corp: 38/548b lim: 30 exec/s: 59 rss: 73Mb L: 12/28 MS: 1 ShuffleBytes- 00:06:09.095 [2024-07-23 10:26:57.346566] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:09.095 [2024-07-23 10:26:57.346682] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:09.095 [2024-07-23 10:26:57.346796] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:09.095 [2024-07-23 10:26:57.346903] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047768) > buf size (4096) 00:06:09.095 [2024-07-23 10:26:57.347120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.095 [2024-07-23 10:26:57.347146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.095 [2024-07-23 10:26:57.347199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.095 [2024-07-23 10:26:57.347213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.095 [2024-07-23 10:26:57.347264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.096 [2024-07-23 10:26:57.347278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.096 [2024-07-23 10:26:57.347329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff3583ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.096 [2024-07-23 10:26:57.347343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.096 #60 NEW cov: 12131 ft: 15349 corp: 39/572b lim: 30 exec/s: 30 rss: 73Mb L: 24/28 MS: 1 ChangeASCIIInt- 00:06:09.096 #60 DONE cov: 12131 ft: 15349 corp: 39/572b lim: 30 exec/s: 30 rss: 73Mb 00:06:09.096 ###### Recommended dictionary. 
###### 00:06:09.096 "\377\377\377\036" # Uses: 6 00:06:09.096 "\015\000\000\000" # Uses: 1 00:06:09.096 ###### End of recommended dictionary. ###### 00:06:09.096 Done 60 runs in 2 second(s) 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:09.096 10:26:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:09.096 [2024-07-23 10:26:57.540513] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:09.096 [2024-07-23 10:26:57.540590] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425938 ] 00:06:09.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.354 [2024-07-23 10:26:57.810243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.354 [2024-07-23 10:26:57.838218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.612 [2024-07-23 10:26:57.890982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.612 [2024-07-23 10:26:57.907314] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:09.612 INFO: Running with entropic power schedule (0xFF, 100). 00:06:09.612 INFO: Seed: 100435450 00:06:09.612 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:09.613 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:09.613 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:09.613 INFO: A corpus is not provided, starting from an empty corpus 00:06:09.613 #2 INITED exec/s: 0 rss: 64Mb 00:06:09.613 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:09.613 This may also happen if the target rejected all inputs we tried so far 00:06:09.613 [2024-07-23 10:26:57.952554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.613 [2024-07-23 10:26:57.952587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.871 NEW_FUNC[1/691]: 0x496d60 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:09.871 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:09.871 #10 NEW cov: 11820 ft: 11818 corp: 2/9b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 3 CrossOver-InsertByte-InsertRepeatedBytes- 00:06:09.871 [2024-07-23 10:26:58.293280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.871 [2024-07-23 10:26:58.293318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.871 #11 NEW cov: 11950 ft: 12448 corp: 3/17b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ChangeBit- 00:06:09.871 [2024-07-23 10:26:58.343705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.871 [2024-07-23 10:26:58.343734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.871 [2024-07-23 10:26:58.343789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:34340024 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.871 [2024-07-23 10:26:58.343804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.871 
[2024-07-23 10:26:58.343857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.871 [2024-07-23 10:26:58.343870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:09.871 [2024-07-23 10:26:58.343920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:09.871 [2024-07-23 10:26:58.343933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:09.871 #17 NEW cov: 11956 ft: 13251 corp: 4/49b lim: 35 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:10.130 [2024-07-23 10:26:58.383409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.383436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.130 #18 NEW cov: 12041 ft: 13619 corp: 5/57b lim: 35 exec/s: 0 rss: 70Mb L: 8/32 MS: 1 ChangeByte- 00:06:10.130 [2024-07-23 10:26:58.423555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.423580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.130 #24 NEW cov: 12041 ft: 13783 corp: 6/65b lim: 35 exec/s: 0 rss: 71Mb L: 8/32 MS: 1 ShuffleBytes- 00:06:10.130 [2024-07-23 10:26:58.473691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:010000ea cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.473715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.130 #27 NEW cov: 12041 ft: 13847 corp: 7/75b lim: 35 exec/s: 0 rss: 71Mb L: 10/32 MS: 3 InsertByte-ChangeByte-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:10.130 [2024-07-23 10:26:58.514059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.514084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.514136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.514150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.514203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.514216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.130 #29 NEW cov: 12041 ft: 14104 corp: 8/99b lim: 35 exec/s: 0 rss: 71Mb L: 24/32 MS: 2 ChangeByte-InsertRepeatedBytes- 
00:06:10.130 [2024-07-23 10:26:58.553900] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.130 [2024-07-23 10:26:58.554024] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.130 [2024-07-23 10:26:58.554149] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.130 [2024-07-23 10:26:58.554354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.554379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.554431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.554451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.554500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.554515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.554564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.554579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.130 #32 NEW cov: 12050 ft: 14139 corp: 9/129b lim: 35 exec/s: 0 rss: 71Mb L: 30/32 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:10.130 [2024-07-23 10:26:58.594394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.594420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.594473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2434000a cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.594487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.594538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.594552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.130 [2024-07-23 10:26:58.594602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.130 [2024-07-23 10:26:58.594615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.130 #33 NEW cov: 12050 ft: 14198 corp: 
10/161b lim: 35 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 CopyPart- 00:06:10.389 [2024-07-23 10:26:58.644151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:b1002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.644175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.389 #39 NEW cov: 12050 ft: 14231 corp: 11/173b lim: 35 exec/s: 0 rss: 71Mb L: 12/32 MS: 1 InsertRepeatedBytes- 00:06:10.389 [2024-07-23 10:26:58.684525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.684549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.389 [2024-07-23 10:26:58.684602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffc9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.684616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.389 [2024-07-23 10:26:58.684668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.684682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.389 #40 NEW cov: 12050 ft: 14254 corp: 12/197b lim: 35 exec/s: 0 rss: 72Mb L: 24/32 MS: 1 ChangeByte- 00:06:10.389 [2024-07-23 10:26:58.734409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:b1002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.734433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.389 #41 NEW cov: 12050 ft: 14276 corp: 13/210b lim: 35 exec/s: 0 rss: 72Mb L: 13/32 MS: 1 InsertByte- 00:06:10.389 [2024-07-23 10:26:58.784554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:2f000a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.784578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.389 #42 NEW cov: 12050 ft: 14298 corp: 14/218b lim: 35 exec/s: 0 rss: 72Mb L: 8/32 MS: 1 ShuffleBytes- 00:06:10.389 [2024-07-23 10:26:58.834953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.834978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.389 [2024-07-23 10:26:58.835031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.835044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.389 [2024-07-23 10:26:58.835114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:2f00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.835128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.389 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:10.389 #43 NEW cov: 12073 ft: 14420 corp: 15/239b lim: 35 exec/s: 0 rss: 72Mb L: 21/32 MS: 1 CrossOver- 00:06:10.389 [2024-07-23 10:26:58.874827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f0a002f cdw11:2f004a2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.389 [2024-07-23 10:26:58.874853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.648 #44 NEW cov: 12073 ft: 14461 corp: 16/249b lim: 35 exec/s: 0 rss: 72Mb L: 10/32 MS: 1 CrossOver- 00:06:10.648 [2024-07-23 10:26:58.915196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8cff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:58.915223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.648 [2024-07-23 10:26:58.915277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:58.915292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.648 [2024-07-23 10:26:58.915344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:2f00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:58.915358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.648 #45 NEW cov: 12073 ft: 14479 corp: 17/270b lim: 35 exec/s: 45 rss: 72Mb L: 21/32 MS: 1 ChangeByte- 00:06:10.648 [2024-07-23 10:26:58.965071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f0039 cdw11:2f000a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:58.965097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.648 #46 NEW cov: 12073 ft: 14538 corp: 18/278b lim: 35 exec/s: 46 rss: 72Mb L: 8/32 MS: 1 ChangeBinInt- 00:06:10.648 [2024-07-23 10:26:59.015141] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:10.648 [2024-07-23 10:26:59.015353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:010000ea cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:59.015379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.648 [2024-07-23 10:26:59.015433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:59.015450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:10.648 #47 NEW cov: 12073 ft: 14721 corp: 19/296b lim: 35 exec/s: 47 rss: 72Mb L: 18/32 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:10.648 [2024-07-23 10:26:59.065368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f0039 cdw11:2f000a41 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:59.065393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.648 #48 NEW cov: 12073 ft: 14731 corp: 20/304b lim: 35 exec/s: 48 rss: 72Mb L: 8/32 MS: 1 ChangeByte- 00:06:10.648 [2024-07-23 10:26:59.115501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:b1002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.648 [2024-07-23 10:26:59.115527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.648 #54 NEW cov: 12073 ft: 14738 corp: 21/317b lim: 35 exec/s: 54 rss: 72Mb L: 13/32 MS: 1 InsertByte- 00:06:10.907 [2024-07-23 10:26:59.155603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f0039 cdw11:2f000a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.907 [2024-07-23 10:26:59.155629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.907 #55 NEW cov: 12073 ft: 14747 corp: 22/327b lim: 35 exec/s: 55 rss: 72Mb L: 10/32 MS: 1 CopyPart- 00:06:10.907 [2024-07-23 10:26:59.195728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f003f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.907 [2024-07-23 10:26:59.195753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.907 #56 NEW cov: 12073 ft: 14750 corp: 23/335b lim: 35 exec/s: 56 rss: 72Mb L: 8/32 MS: 1 ChangeBit- 00:06:10.907 [2024-07-23 10:26:59.245864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2fb1002f cdw11:b100b1b1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.907 [2024-07-23 10:26:59.245889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.907 #57 NEW cov: 12073 ft: 14756 corp: 24/344b lim: 35 exec/s: 57 rss: 72Mb L: 9/32 MS: 1 EraseBytes- 00:06:10.907 [2024-07-23 10:26:59.285944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ce2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.907 [2024-07-23 10:26:59.285971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.907 #58 NEW cov: 12073 ft: 14761 corp: 25/352b lim: 35 exec/s: 58 rss: 72Mb L: 8/32 MS: 1 ChangeBinInt- 00:06:10.907 [2024-07-23 10:26:59.326402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffc9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.907 [2024-07-23 10:26:59.326428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.907 [2024-07-23 10:26:59.326480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.907 [2024-07-23 10:26:59.326497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.907 #59 NEW cov: 12073 ft: 15126 corp: 26/376b lim: 35 exec/s: 59 rss: 73Mb L: 24/32 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:10.907 [2024-07-23 10:26:59.376530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:595900ea cdw11:59005959 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.907 [2024-07-23 10:26:59.376554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.167 #60 NEW cov: 12073 ft: 15527 corp: 27/400b lim: 35 exec/s: 60 rss: 73Mb L: 24/32 MS: 1 InsertRepeatedBytes- 00:06:11.167 [2024-07-23 10:26:59.426637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.426662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.426717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.426731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.426789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.426803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.167 #61 NEW cov: 12073 ft: 15534 corp: 28/424b lim: 35 exec/s: 61 rss: 73Mb L: 24/32 MS: 1 ShuffleBytes- 00:06:11.167 [2024-07-23 10:26:59.466841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.466866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.466920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.466934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.466986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.467000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.467054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.467068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.167 #62 NEW cov: 12073 ft: 15585 corp: 
29/458b lim: 35 exec/s: 62 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:06:11.167 [2024-07-23 10:26:59.506982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:2f000a2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.507006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.507077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a340024 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.507091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.507148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.507162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.507217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.507230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.167 #63 NEW cov: 12073 ft: 15593 corp: 30/490b lim: 35 exec/s: 63 rss: 73Mb L: 32/34 MS: 1 ShuffleBytes- 00:06:11.167 [2024-07-23 10:26:59.547071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:2f000a2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.547095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.547150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a340024 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.547164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.547215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.547229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.547281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:34340034 cdw11:cc003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.547295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.167 #64 NEW cov: 12073 ft: 15628 corp: 31/522b lim: 35 exec/s: 64 rss: 73Mb L: 32/34 MS: 1 ChangeBinInt- 00:06:11.167 [2024-07-23 10:26:59.597123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.597147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.597201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.597215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.167 [2024-07-23 10:26:59.597268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ff98 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.167 [2024-07-23 10:26:59.597281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.167 #65 NEW cov: 12073 ft: 15640 corp: 32/547b lim: 35 exec/s: 65 rss: 73Mb L: 25/34 MS: 1 InsertByte- 00:06:11.167 [2024-07-23 10:26:59.637347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:2f000a2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.168 [2024-07-23 10:26:59.637371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.168 [2024-07-23 10:26:59.637428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a340024 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.168 [2024-07-23 10:26:59.637443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.168 [2024-07-23 10:26:59.637509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:34340034 cdw11:34003420 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.168 [2024-07-23 10:26:59.637526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.168 [2024-07-23 10:26:59.637582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.168 [2024-07-23 10:26:59.637595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.168 #66 NEW cov: 12073 ft: 15648 corp: 33/579b lim: 35 exec/s: 66 rss: 73Mb L: 32/34 MS: 1 ChangeBinInt- 00:06:11.427 [2024-07-23 10:26:59.677225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f0039 cdw11:39000a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.677250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.427 [2024-07-23 10:26:59.677302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a6e002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.677317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.427 #67 NEW cov: 12073 ft: 15676 corp: 34/599b lim: 35 exec/s: 67 rss: 73Mb L: 20/34 MS: 1 CopyPart- 00:06:11.427 [2024-07-23 10:26:59.727345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:b1002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 
10:26:59.727370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.427 [2024-07-23 10:26:59.727421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:7eb100b1 cdw11:6e000a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.727435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.427 #68 NEW cov: 12073 ft: 15678 corp: 35/613b lim: 35 exec/s: 68 rss: 73Mb L: 14/34 MS: 1 InsertByte- 00:06:11.427 [2024-07-23 10:26:59.777771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.777799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.427 [2024-07-23 10:26:59.777852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2434000a cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.777866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.427 [2024-07-23 10:26:59.777915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.777929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.427 [2024-07-23 10:26:59.777978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.777991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.427 #69 NEW cov: 12073 ft: 15737 corp: 36/645b lim: 35 exec/s: 69 rss: 73Mb L: 32/34 MS: 1 ShuffleBytes- 00:06:11.427 [2024-07-23 10:26:59.827655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f0039 cdw11:2f000a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.827679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.427 #70 NEW cov: 12073 ft: 15738 corp: 37/663b lim: 35 exec/s: 70 rss: 73Mb L: 18/34 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:11.427 [2024-07-23 10:26:59.867619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f00ff cdw11:0a002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.427 [2024-07-23 10:26:59.867644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.427 #76 NEW cov: 12073 ft: 15757 corp: 38/671b lim: 35 exec/s: 76 rss: 73Mb L: 8/34 MS: 1 ChangeByte- 00:06:11.427 [2024-07-23 10:26:59.907663] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:11.428 [2024-07-23 10:26:59.907912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:010000ea cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.428 [2024-07-23 
10:26:59.907938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.428 [2024-07-23 10:26:59.907990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.428 [2024-07-23 10:26:59.908006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.687 #77 NEW cov: 12073 ft: 15761 corp: 39/690b lim: 35 exec/s: 77 rss: 73Mb L: 19/34 MS: 1 InsertByte- 00:06:11.687 [2024-07-23 10:26:59.948193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2f2f002f cdw11:2f000a2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.687 [2024-07-23 10:26:59.948218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.687 [2024-07-23 10:26:59.948271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a340099 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.687 [2024-07-23 10:26:59.948284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.687 [2024-07-23 10:26:59.948336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:34340034 cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.687 [2024-07-23 10:26:59.948348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.687 [2024-07-23 10:26:59.948398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:34340034 cdw11:cc003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.687 [2024-07-23 10:26:59.948411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.687 #78 NEW cov: 12073 ft: 15767 corp: 40/722b lim: 35 exec/s: 39 rss: 73Mb L: 32/34 MS: 1 ChangeByte- 00:06:11.687 #78 DONE cov: 12073 ft: 15767 corp: 40/722b lim: 35 exec/s: 39 rss: 73Mb 00:06:11.687 ###### Recommended dictionary. ###### 00:06:11.687 "\001\000\000\000\000\000\000\000" # Uses: 2 00:06:11.687 "\001\000\000\000" # Uses: 0 00:06:11.687 ###### End of recommended dictionary. 
###### 00:06:11.687 Done 78 runs in 2 second(s) 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:11.687 10:27:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:11.687 [2024-07-23 10:27:00.156672] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:11.687 [2024-07-23 10:27:00.156747] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426315 ] 00:06:11.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.946 [2024-07-23 10:27:00.434838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.210 [2024-07-23 10:27:00.460962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.210 [2024-07-23 10:27:00.513606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.210 [2024-07-23 10:27:00.529931] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:12.210 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:12.210 INFO: Seed: 2721503417 00:06:12.210 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:12.210 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:12.210 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:12.210 INFO: A corpus is not provided, starting from an empty corpus 00:06:12.210 #2 INITED exec/s: 0 rss: 63Mb 00:06:12.210 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:12.210 This may also happen if the target rejected all inputs we tried so far 00:06:12.469 NEW_FUNC[1/680]: 0x498a30 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:12.469 NEW_FUNC[2/680]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:12.469 #6 NEW cov: 11731 ft: 11705 corp: 2/10b lim: 20 exec/s: 0 rss: 70Mb L: 9/9 MS: 4 ShuffleBytes-ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:12.728 #12 NEW cov: 11861 ft: 12234 corp: 3/20b lim: 20 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:06:12.728 #13 NEW cov: 11867 ft: 12552 corp: 4/30b lim: 20 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeByte- 00:06:12.728 #18 NEW cov: 11956 ft: 13051 corp: 5/44b lim: 20 exec/s: 0 rss: 71Mb L: 14/14 MS: 5 CMP-ShuffleBytes-CrossOver-ShuffleBytes-InsertRepeatedBytes- DE: "\377\036"- 00:06:12.728 #19 NEW cov: 11956 ft: 13133 corp: 6/58b lim: 20 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 PersAutoDict- DE: "\377\036"- 00:06:12.728 #20 NEW cov: 11956 ft: 13294 corp: 7/69b lim: 20 exec/s: 0 rss: 71Mb L: 11/14 MS: 1 InsertByte- 00:06:12.986 #21 NEW cov: 11956 ft: 13332 corp: 8/80b lim: 20 exec/s: 0 rss: 72Mb L: 11/14 MS: 1 ChangeASCIIInt- 00:06:12.986 #22 NEW cov: 11956 ft: 13443 corp: 9/91b lim: 20 exec/s: 0 rss: 72Mb L: 11/14 MS: 1 PersAutoDict- DE: "\377\036"- 00:06:12.986 #23 NEW cov: 11956 ft: 13457 corp: 10/101b lim: 20 exec/s: 0 rss: 72Mb L: 10/14 MS: 1 ChangeBit- 00:06:12.986 #24 NEW cov: 11956 ft: 13540 corp: 11/112b lim: 20 exec/s: 0 rss: 72Mb L: 11/14 MS: 1 ShuffleBytes- 00:06:13.244 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:13.244 #25 NEW cov: 11979 ft: 13823 corp: 12/117b lim: 20 exec/s: 0 rss: 72Mb L: 5/14 MS: 1 CrossOver- 00:06:13.244 #26 NEW cov: 11979 ft: 13836 corp: 13/127b lim: 20 exec/s: 0 rss: 72Mb L: 10/14 MS: 1 CrossOver- 00:06:13.244 #27 NEW cov: 11979 ft: 13851 corp: 14/134b lim: 20 exec/s: 27 rss: 72Mb L: 7/14 MS: 1 EraseBytes- 00:06:13.244 #28 NEW cov: 11979 ft: 13871 corp: 15/147b lim: 20 exec/s: 28 rss: 72Mb L: 13/14 MS: 1 InsertRepeatedBytes- 00:06:13.244 #29 NEW cov: 11979 ft: 13949 corp: 16/158b lim: 20 exec/s: 29 rss: 72Mb L: 11/14 MS: 1 ChangeByte- 00:06:13.503 #30 NEW cov: 11979 ft: 13970 corp: 17/168b lim: 20 exec/s: 30 rss: 72Mb L: 10/14 MS: 1 ShuffleBytes- 00:06:13.503 #31 NEW cov: 11979 ft: 13993 corp: 18/175b lim: 20 exec/s: 31 rss: 72Mb L: 7/14 MS: 1 CopyPart- 00:06:13.503 #37 NEW cov: 11996 ft: 14207 corp: 19/191b lim: 20 exec/s: 37 rss: 72Mb L: 16/16 MS: 1 PersAutoDict- DE: "\377\036"- 00:06:13.503 #38 NEW cov: 11996 ft: 14214 corp: 20/209b lim: 20 exec/s: 38 rss: 72Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:13.503 #39 NEW cov: 11996 ft: 14239 corp: 21/223b lim: 20 exec/s: 39 rss: 72Mb L: 14/18 MS: 1 CopyPart- 00:06:13.762 #45 NEW cov: 11996 ft: 14319 corp: 22/237b lim: 20 exec/s: 45 rss: 
72Mb L: 14/18 MS: 1 ChangeBinInt- 00:06:13.762 #46 NEW cov: 11996 ft: 14329 corp: 23/248b lim: 20 exec/s: 46 rss: 72Mb L: 11/18 MS: 1 InsertByte- 00:06:13.762 #47 NEW cov: 11996 ft: 14335 corp: 24/256b lim: 20 exec/s: 47 rss: 73Mb L: 8/18 MS: 1 EraseBytes- 00:06:13.762 #48 NEW cov: 11996 ft: 14349 corp: 25/263b lim: 20 exec/s: 48 rss: 73Mb L: 7/18 MS: 1 ChangeByte- 00:06:14.021 #52 NEW cov: 11996 ft: 14395 corp: 26/273b lim: 20 exec/s: 52 rss: 73Mb L: 10/18 MS: 4 CrossOver-ChangeByte-InsertByte-InsertRepeatedBytes- 00:06:14.021 #53 NEW cov: 11996 ft: 14436 corp: 27/283b lim: 20 exec/s: 53 rss: 73Mb L: 10/18 MS: 1 CrossOver- 00:06:14.021 #54 NEW cov: 11996 ft: 14453 corp: 28/293b lim: 20 exec/s: 54 rss: 73Mb L: 10/18 MS: 1 ShuffleBytes- 00:06:14.021 #55 NEW cov: 11996 ft: 14457 corp: 29/299b lim: 20 exec/s: 55 rss: 73Mb L: 6/18 MS: 1 EraseBytes- 00:06:14.021 #56 NEW cov: 11996 ft: 14469 corp: 30/312b lim: 20 exec/s: 56 rss: 73Mb L: 13/18 MS: 1 PersAutoDict- DE: "\377\036"- 00:06:14.281 #57 NEW cov: 11996 ft: 14512 corp: 31/332b lim: 20 exec/s: 57 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:06:14.281 #58 NEW cov: 11996 ft: 14529 corp: 32/345b lim: 20 exec/s: 29 rss: 73Mb L: 13/20 MS: 1 ChangeBit- 00:06:14.281 #58 DONE cov: 11996 ft: 14529 corp: 32/345b lim: 20 exec/s: 29 rss: 73Mb 00:06:14.281 ###### Recommended dictionary. ###### 00:06:14.281 "\377\036" # Uses: 4 00:06:14.281 ###### End of recommended dictionary. ###### 00:06:14.281 Done 58 runs in 2 second(s) 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:14.281 10:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:14.281 [2024-07-23 10:27:02.756935] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:14.281 [2024-07-23 10:27:02.757009] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426796 ] 00:06:14.540 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.799 [2024-07-23 10:27:03.053723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.799 [2024-07-23 10:27:03.081214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.799 [2024-07-23 10:27:03.134137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.799 [2024-07-23 10:27:03.150457] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:14.799 INFO: Running with entropic power schedule (0xFF, 100). 00:06:14.799 INFO: Seed: 1048499511 00:06:14.799 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:14.799 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:14.799 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:14.799 INFO: A corpus is not provided, starting from an empty corpus 00:06:14.799 #2 INITED exec/s: 0 rss: 64Mb 00:06:14.799 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:14.799 This may also happen if the target rejected all inputs we tried so far 00:06:14.799 [2024-07-23 10:27:03.229160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.799 [2024-07-23 10:27:03.229211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.799 [2024-07-23 10:27:03.229359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.799 [2024-07-23 10:27:03.229381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.799 [2024-07-23 10:27:03.229506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.799 [2024-07-23 10:27:03.229526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.799 [2024-07-23 10:27:03.229661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.799 [2024-07-23 10:27:03.229681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.057 NEW_FUNC[1/692]: 0x499b20 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:15.057 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:15.057 #4 NEW cov: 11841 ft: 11842 corp: 2/35b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:15.316 [2024-07-23 10:27:03.589269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.589317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.589425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.589446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.589547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.589567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.316 #11 NEW cov: 11971 ft: 12670 corp: 3/58b lim: 35 exec/s: 0 rss: 70Mb L: 23/34 MS: 2 InsertByte-CrossOver- 00:06:15.316 [2024-07-23 10:27:03.639313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.639340] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.639439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.639455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.639539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e7d8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.639554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.316 #12 NEW cov: 11977 ft: 12997 corp: 4/81b lim: 35 exec/s: 0 rss: 70Mb L: 23/34 MS: 1 ChangeByte- 00:06:15.316 [2024-07-23 10:27:03.699932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.699958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.700047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.700063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.700172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.700188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.700275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.700290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.316 #15 NEW cov: 12062 ft: 13195 corp: 5/114b lim: 35 exec/s: 0 rss: 70Mb L: 33/34 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:06:15.316 [2024-07-23 10:27:03.749872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.749898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.750004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e488e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.750020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.316 [2024-07-23 10:27:03.750109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e7d8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 
10:27:03.750124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.316 #16 NEW cov: 12062 ft: 13404 corp: 6/137b lim: 35 exec/s: 0 rss: 70Mb L: 23/34 MS: 1 ChangeByte- 00:06:15.316 [2024-07-23 10:27:03.810410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.316 [2024-07-23 10:27:03.810437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.317 [2024-07-23 10:27:03.810525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.317 [2024-07-23 10:27:03.810541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.317 [2024-07-23 10:27:03.810628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.317 [2024-07-23 10:27:03.810643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.317 [2024-07-23 10:27:03.810734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.317 [2024-07-23 10:27:03.810749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.576 #17 NEW cov: 12062 ft: 13440 corp: 7/170b lim: 35 exec/s: 0 rss: 71Mb L: 33/34 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:15.576 [2024-07-23 10:27:03.870645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.870674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:03.870771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.870801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:03.870909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8eff8e8e cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.870924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:03.871007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.871023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.576 #18 NEW cov: 12062 ft: 13513 corp: 8/202b lim: 35 exec/s: 0 rss: 71Mb L: 32/34 MS: 1 CrossOver- 00:06:15.576 [2024-07-23 10:27:03.920441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.920467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:03.920557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e488e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.920573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:03.920660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e7d8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.920674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.576 #19 NEW cov: 12062 ft: 13594 corp: 9/225b lim: 35 exec/s: 0 rss: 71Mb L: 23/34 MS: 1 ShuffleBytes- 00:06:15.576 [2024-07-23 10:27:03.980541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.980569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:03.980672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e488e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.980689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:03.980782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:948e7d8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:03.980799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.576 #20 NEW cov: 12062 ft: 13611 corp: 10/248b lim: 35 exec/s: 0 rss: 71Mb L: 23/34 MS: 1 ChangeBinInt- 00:06:15.576 [2024-07-23 10:27:04.041215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:04.041241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:04.041332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:04.041348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:04.041436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:04.041450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.576 [2024-07-23 10:27:04.041544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.576 [2024-07-23 10:27:04.041559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.576 #21 NEW cov: 12062 ft: 13701 corp: 11/281b lim: 35 exec/s: 0 rss: 71Mb L: 33/34 MS: 1 ChangeBit- 00:06:15.835 [2024-07-23 10:27:04.091182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.091210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.835 [2024-07-23 10:27:04.091302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.091319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.835 [2024-07-23 10:27:04.091410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.091435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.835 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:15.835 #22 NEW cov: 12085 ft: 13763 corp: 12/305b lim: 35 exec/s: 0 rss: 71Mb L: 24/34 MS: 1 InsertByte- 00:06:15.835 [2024-07-23 10:27:04.141401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.141429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.835 [2024-07-23 10:27:04.141526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff8effff cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.141542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.835 [2024-07-23 10:27:04.141632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e7d8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.141649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.835 #23 NEW cov: 12085 ft: 13786 corp: 13/328b lim: 35 exec/s: 0 rss: 71Mb L: 23/34 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:15.835 [2024-07-23 10:27:04.190790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a4015cf cdw11:e4150003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.190826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.835 #28 NEW cov: 12085 ft: 14567 corp: 14/338b lim: 35 exec/s: 28 rss: 71Mb L: 10/34 MS: 5 InsertByte-InsertByte-InsertByte-InsertByte-CopyPart- 00:06:15.835 [2024-07-23 10:27:04.242115] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a4015cf cdw11:e4150003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.242143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.835 [2024-07-23 10:27:04.242239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e0a40 cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.835 [2024-07-23 10:27:04.242255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.836 [2024-07-23 10:27:04.242352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e480001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.836 [2024-07-23 10:27:04.242368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.836 [2024-07-23 10:27:04.242463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:8e7de48e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.836 [2024-07-23 10:27:04.242480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.836 #29 NEW cov: 12085 ft: 14645 corp: 15/371b lim: 35 exec/s: 29 rss: 72Mb L: 33/34 MS: 1 CrossOver- 00:06:15.836 [2024-07-23 10:27:04.311999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.836 [2024-07-23 10:27:04.312027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.836 [2024-07-23 10:27:04.312119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.836 [2024-07-23 10:27:04.312138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.836 [2024-07-23 10:27:04.312231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:418e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.836 [2024-07-23 10:27:04.312246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.094 #30 NEW cov: 12085 ft: 14677 corp: 16/395b lim: 35 exec/s: 30 rss: 72Mb L: 24/34 MS: 1 ChangeByte- 00:06:16.094 [2024-07-23 10:27:04.382632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000fd00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.382662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.382757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.382782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.382877] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.382894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.382993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.383012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.094 #33 NEW cov: 12085 ft: 14687 corp: 17/424b lim: 35 exec/s: 33 rss: 72Mb L: 29/34 MS: 3 ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:06:16.094 [2024-07-23 10:27:04.432058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a4015cf cdw11:e4150003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.432086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.432171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e4ff0a40 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.432189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.094 #34 NEW cov: 12085 ft: 14908 corp: 18/438b lim: 35 exec/s: 34 rss: 72Mb L: 14/34 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:16.094 [2024-07-23 10:27:04.483089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000fd00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.483121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.483225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.483242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.483337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.483353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.483454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.483472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.094 #35 NEW cov: 12085 ft: 14998 corp: 19/467b lim: 35 exec/s: 35 rss: 72Mb L: 29/34 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:16.094 [2024-07-23 10:27:04.552857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 
10:27:04.552890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.552983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e488e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.094 [2024-07-23 10:27:04.552999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.094 [2024-07-23 10:27:04.553090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:948e7d8e cdw11:75710001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.095 [2024-07-23 10:27:04.553106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.095 #36 NEW cov: 12085 ft: 15024 corp: 20/490b lim: 35 exec/s: 36 rss: 72Mb L: 23/34 MS: 1 ChangeBinInt- 00:06:16.354 [2024-07-23 10:27:04.623213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-07-23 10:27:04.623246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.354 [2024-07-23 10:27:04.623335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:488e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-07-23 10:27:04.623353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.354 [2024-07-23 10:27:04.623447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e7d cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-07-23 10:27:04.623465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.354 #37 NEW cov: 12085 ft: 15047 corp: 21/514b lim: 35 exec/s: 37 rss: 72Mb L: 24/34 MS: 1 InsertByte- 00:06:16.354 [2024-07-23 10:27:04.674021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000fd00 cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-07-23 10:27:04.674050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.354 [2024-07-23 10:27:04.674151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e778e cdw11:8e480001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-07-23 10:27:04.674168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.354 [2024-07-23 10:27:04.674266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7d8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-07-23 10:27:04.674282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.354 [2024-07-23 10:27:04.674375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 
[2024-07-23 10:27:04.674390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.354 #38 NEW cov: 12085 ft: 15065 corp: 22/543b lim: 35 exec/s: 38 rss: 72Mb L: 29/34 MS: 1 CrossOver- 00:06:16.354 [2024-07-23 10:27:04.724312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000fd00 cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-07-23 10:27:04.724339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.355 [2024-07-23 10:27:04.724433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e7f8e cdw11:8e480001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.724447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.355 [2024-07-23 10:27:04.724544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7d8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.724559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.355 [2024-07-23 10:27:04.724660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.724674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.355 #39 NEW cov: 12085 ft: 15133 corp: 23/572b lim: 35 exec/s: 39 rss: 72Mb L: 29/34 MS: 1 ChangeBit- 00:06:16.355 [2024-07-23 10:27:04.784432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.784459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.355 [2024-07-23 10:27:04.784546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.784561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.355 [2024-07-23 10:27:04.784644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff28ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.784661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.355 [2024-07-23 10:27:04.784756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.784771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.355 #40 NEW cov: 12085 ft: 15201 corp: 24/605b lim: 35 exec/s: 40 rss: 72Mb L: 33/34 MS: 1 ChangeByte- 00:06:16.355 [2024-07-23 10:27:04.843526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8efd8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.355 [2024-07-23 10:27:04.843551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.614 #43 NEW cov: 12085 ft: 15206 corp: 25/612b lim: 35 exec/s: 43 rss: 72Mb L: 7/34 MS: 3 CrossOver-ChangeBit-CrossOver- 00:06:16.614 [2024-07-23 10:27:04.894949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.894979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:04.895087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.895108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:04.895206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e5c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.895222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:04.895306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.895323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.614 #44 NEW cov: 12085 ft: 15232 corp: 26/646b lim: 35 exec/s: 44 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:06:16.614 [2024-07-23 10:27:04.965176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000fd00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.965203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:04.965292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.965309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:04.965403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.965420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:04.965507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:04.965523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.614 #45 NEW cov: 12085 ft: 15248 corp: 27/679b lim: 35 exec/s: 45 rss: 72Mb L: 
33/34 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:16.614 [2024-07-23 10:27:05.035016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:05.035044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:05.035143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e608e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:05.035159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:05.035246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e418e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:05.035262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.614 #46 NEW cov: 12085 ft: 15257 corp: 28/704b lim: 35 exec/s: 46 rss: 72Mb L: 25/34 MS: 1 InsertByte- 00:06:16.614 [2024-07-23 10:27:05.105572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.614 [2024-07-23 10:27:05.105600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.614 [2024-07-23 10:27:05.105709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.615 [2024-07-23 10:27:05.105731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.615 [2024-07-23 10:27:05.105832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e7d8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.615 [2024-07-23 10:27:05.105848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.615 [2024-07-23 10:27:05.105937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.615 [2024-07-23 10:27:05.105952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.874 #47 NEW cov: 12085 ft: 15267 corp: 29/737b lim: 35 exec/s: 47 rss: 73Mb L: 33/34 MS: 1 CopyPart- 00:06:16.874 [2024-07-23 10:27:05.155369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.874 [2024-07-23 10:27:05.155398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.874 [2024-07-23 10:27:05.155505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e488e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.874 [2024-07-23 10:27:05.155523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.874 [2024-07-23 10:27:05.155614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:948e7d8e cdw11:8e8e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.874 [2024-07-23 10:27:05.155630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.874 #48 NEW cov: 12085 ft: 15286 corp: 30/760b lim: 35 exec/s: 48 rss: 73Mb L: 23/34 MS: 1 CrossOver- 00:06:16.874 [2024-07-23 10:27:05.205990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.874 [2024-07-23 10:27:05.206019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.874 [2024-07-23 10:27:05.206120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.874 [2024-07-23 10:27:05.206138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.874 [2024-07-23 10:27:05.206232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.874 [2024-07-23 10:27:05.206249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.874 [2024-07-23 10:27:05.206339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffbe cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.874 [2024-07-23 10:27:05.206355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.874 #49 NEW cov: 12085 ft: 15294 corp: 31/793b lim: 35 exec/s: 24 rss: 73Mb L: 33/34 MS: 1 ChangeByte- 00:06:16.874 #49 DONE cov: 12085 ft: 15294 corp: 31/793b lim: 35 exec/s: 24 rss: 73Mb 00:06:16.874 ###### Recommended dictionary. ###### 00:06:16.874 "\377\377\377\377" # Uses: 4 00:06:16.874 ###### End of recommended dictionary. 
###### 00:06:16.874 Done 49 runs in 2 second(s) 00:06:16.874 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:16.875 10:27:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:17.134 [2024-07-23 10:27:05.389925] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:17.134 [2024-07-23 10:27:05.390001] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427445 ] 00:06:17.134 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.393 [2024-07-23 10:27:05.655713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.393 [2024-07-23 10:27:05.686323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.393 [2024-07-23 10:27:05.739332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.393 [2024-07-23 10:27:05.755665] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:17.393 INFO: Running with entropic power schedule (0xFF, 100). 
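Note on the nvmf/run.sh trace just above: each fuzzer instance derives its own TCP listener from the fuzzer index. The index is zero-padded and appended to a 44xx base (printf %02d 5 gives 05, hence port 4405), and the stock fuzz_json.conf has its trsvcid rewritten before launch. A minimal sketch of that setup follows; $rootdir stands in for the absolute workspace path in the trace, and the redirect into $nvmf_cfg is an assumption, since the trace shows the sed command but not where its output goes.

    # Sketch of the per-instance setup visible in the nvmf/run.sh trace above.
    # $rootdir is a stand-in for the absolute spdk checkout path.
    fuzzer_type=5
    port=44$(printf %02d "$fuzzer_type")   # 5 -> 4405, 6 -> 4406
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    corpus_dir=$rootdir/../corpus/llvm_nvmf_${fuzzer_type}
    mkdir -p "$corpus_dir"
    # Retarget the listener in the template config at this instance's port
    # (output redirect assumed; not shown in the trace).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"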
00:06:17.393 INFO: Seed: 3654499328 00:06:17.393 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:17.393 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:17.393 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:17.393 INFO: A corpus is not provided, starting from an empty corpus 00:06:17.393 #2 INITED exec/s: 0 rss: 63Mb 00:06:17.393 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:17.393 This may also happen if the target rejected all inputs we tried so far 00:06:17.393 [2024-07-23 10:27:05.833078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.393 [2024-07-23 10:27:05.833126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.393 [2024-07-23 10:27:05.833244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.393 [2024-07-23 10:27:05.833265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.962 NEW_FUNC[1/692]: 0x49bcb0 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:17.962 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:17.962 #4 NEW cov: 11852 ft: 11845 corp: 2/21b lim: 45 exec/s: 0 rss: 70Mb L: 20/20 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:17.962 [2024-07-23 10:27:06.193869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.193918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.194026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.194047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.962 #5 NEW cov: 11982 ft: 12540 corp: 3/42b lim: 45 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 CrossOver- 00:06:17.962 [2024-07-23 10:27:06.245274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.245305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.245407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.245425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.245521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.245537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.245625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.245641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.245739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.245754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.962 #6 NEW cov: 11988 ft: 13219 corp: 4/87b lim: 45 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:17.962 [2024-07-23 10:27:06.314495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.314526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.314625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.314642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.962 #7 NEW cov: 12073 ft: 13418 corp: 5/107b lim: 45 exec/s: 0 rss: 71Mb L: 20/45 MS: 1 ChangeByte- 00:06:17.962 [2024-07-23 10:27:06.375587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.375615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.375711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.375728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.375824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.375839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.375932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.375947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.962 #11 NEW cov: 12073 ft: 13481 corp: 6/148b lim: 45 exec/s: 0 rss: 71Mb L: 41/45 MS: 4 
CopyPart-CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:17.962 [2024-07-23 10:27:06.426219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.426249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.426346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.426362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.962 [2024-07-23 10:27:06.426460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.962 [2024-07-23 10:27:06.426476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.963 [2024-07-23 10:27:06.426565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.963 [2024-07-23 10:27:06.426581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.963 [2024-07-23 10:27:06.426682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.963 [2024-07-23 10:27:06.426696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.963 #12 NEW cov: 12073 ft: 13521 corp: 7/193b lim: 45 exec/s: 0 rss: 71Mb L: 45/45 MS: 1 CopyPart- 00:06:18.222 [2024-07-23 10:27:06.496041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.222 [2024-07-23 10:27:06.496072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.222 [2024-07-23 10:27:06.496154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.222 [2024-07-23 10:27:06.496172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.222 [2024-07-23 10:27:06.496261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.222 [2024-07-23 10:27:06.496276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.222 [2024-07-23 10:27:06.496369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.222 [2024-07-23 10:27:06.496389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.222 #13 NEW cov: 12073 ft: 13603 corp: 8/229b lim: 45 
exec/s: 0 rss: 71Mb L: 36/45 MS: 1 EraseBytes- 00:06:18.222 [2024-07-23 10:27:06.555463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.222 [2024-07-23 10:27:06.555489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.222 [2024-07-23 10:27:06.555577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffc00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.222 [2024-07-23 10:27:06.555593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.222 #14 NEW cov: 12073 ft: 13666 corp: 9/250b lim: 45 exec/s: 0 rss: 72Mb L: 21/45 MS: 1 InsertByte- 00:06:18.222 [2024-07-23 10:27:06.615936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.222 [2024-07-23 10:27:06.615964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.222 [2024-07-23 10:27:06.616067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.223 [2024-07-23 10:27:06.616083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.223 #15 NEW cov: 12073 ft: 13712 corp: 10/271b lim: 45 exec/s: 0 rss: 72Mb L: 21/45 MS: 1 ShuffleBytes- 00:06:18.223 [2024-07-23 10:27:06.666139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.223 [2024-07-23 10:27:06.666166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.223 [2024-07-23 10:27:06.666264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffc00007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.223 [2024-07-23 10:27:06.666280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.223 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:18.223 #16 NEW cov: 12096 ft: 13747 corp: 11/292b lim: 45 exec/s: 0 rss: 72Mb L: 21/45 MS: 1 ChangeBit- 00:06:18.483 [2024-07-23 10:27:06.726495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.726524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.483 [2024-07-23 10:27:06.726609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.726625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.483 #17 NEW cov: 12096 ft: 13774 corp: 12/312b lim: 45 exec/s: 0 rss: 72Mb L: 
20/45 MS: 1 ChangeBit- 00:06:18.483 [2024-07-23 10:27:06.776462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.776488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.483 [2024-07-23 10:27:06.776584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.776602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.483 #23 NEW cov: 12096 ft: 13820 corp: 13/332b lim: 45 exec/s: 23 rss: 72Mb L: 20/45 MS: 1 ChangeBit- 00:06:18.483 [2024-07-23 10:27:06.836660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.836686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.483 [2024-07-23 10:27:06.836818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.836835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.483 #24 NEW cov: 12096 ft: 13912 corp: 14/353b lim: 45 exec/s: 24 rss: 72Mb L: 21/45 MS: 1 CopyPart- 00:06:18.483 [2024-07-23 10:27:06.887685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.887712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.483 [2024-07-23 10:27:06.887817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.887834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.483 [2024-07-23 10:27:06.887923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.887937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.483 [2024-07-23 10:27:06.888028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.888043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.483 #25 NEW cov: 12096 ft: 13978 corp: 15/394b lim: 45 exec/s: 25 rss: 72Mb L: 41/45 MS: 1 CopyPart- 00:06:18.483 [2024-07-23 10:27:06.947072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebffff cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 
10:27:06.947099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.483 [2024-07-23 10:27:06.947200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebffebeb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.483 [2024-07-23 10:27:06.947217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.483 #26 NEW cov: 12096 ft: 14010 corp: 16/415b lim: 45 exec/s: 26 rss: 72Mb L: 21/45 MS: 1 CrossOver- 00:06:18.746 [2024-07-23 10:27:07.007271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.007298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.007409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.007426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.746 #27 NEW cov: 12096 ft: 14049 corp: 17/435b lim: 45 exec/s: 27 rss: 72Mb L: 20/45 MS: 1 ChangeByte- 00:06:18.746 [2024-07-23 10:27:07.068354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.068382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.068469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.068484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.068580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.068599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.068697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.068713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.746 #33 NEW cov: 12096 ft: 14078 corp: 18/477b lim: 45 exec/s: 33 rss: 72Mb L: 42/45 MS: 1 InsertByte- 00:06:18.746 [2024-07-23 10:27:07.128906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.128934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.129025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.129041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.129130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.129145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.129238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.129254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.129341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.129355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.746 #34 NEW cov: 12096 ft: 14146 corp: 19/522b lim: 45 exec/s: 34 rss: 72Mb L: 45/45 MS: 1 ShuffleBytes- 00:06:18.746 [2024-07-23 10:27:07.178006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebffff cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.178033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.178133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebffebeb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.178148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.746 #35 NEW cov: 12096 ft: 14169 corp: 20/543b lim: 45 exec/s: 35 rss: 72Mb L: 21/45 MS: 1 CopyPart- 00:06:18.746 [2024-07-23 10:27:07.238250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.238277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.746 [2024-07-23 10:27:07.238366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.746 [2024-07-23 10:27:07.238382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.004 #36 NEW cov: 12096 ft: 14185 corp: 21/564b lim: 45 exec/s: 36 rss: 73Mb L: 21/45 MS: 1 CopyPart- 00:06:19.004 [2024-07-23 10:27:07.299324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebeb2b cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.299352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:19.004 [2024-07-23 10:27:07.299444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.299460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.004 [2024-07-23 10:27:07.299557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.299573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.004 [2024-07-23 10:27:07.299659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ebebf6eb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.299674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.004 #37 NEW cov: 12096 ft: 14222 corp: 22/607b lim: 45 exec/s: 37 rss: 73Mb L: 43/45 MS: 1 InsertByte- 00:06:19.004 [2024-07-23 10:27:07.358695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.358724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.004 [2024-07-23 10:27:07.358819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.358835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.004 #38 NEW cov: 12096 ft: 14232 corp: 23/626b lim: 45 exec/s: 38 rss: 73Mb L: 19/45 MS: 1 EraseBytes- 00:06:19.004 [2024-07-23 10:27:07.418919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3f00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.418946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.004 [2024-07-23 10:27:07.419047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.419064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.004 #39 NEW cov: 12096 ft: 14240 corp: 24/646b lim: 45 exec/s: 39 rss: 73Mb L: 20/45 MS: 1 CMP- DE: "?\000\000\000\000\000\000\000"- 00:06:19.004 [2024-07-23 10:27:07.479100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebf4ffff cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 [2024-07-23 10:27:07.479132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.004 [2024-07-23 10:27:07.479234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebebebeb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.004 
[2024-07-23 10:27:07.479250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.005 #40 NEW cov: 12096 ft: 14258 corp: 25/668b lim: 45 exec/s: 40 rss: 73Mb L: 22/45 MS: 1 InsertByte- 00:06:19.264 [2024-07-23 10:27:07.530265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.530294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.530389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.530405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.530503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.530519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.530608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ebebebeb cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.530623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.530725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffebeb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.530740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:19.264 #41 NEW cov: 12096 ft: 14277 corp: 26/713b lim: 45 exec/s: 41 rss: 73Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:19.264 [2024-07-23 10:27:07.579520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffefffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.579547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.579643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.579659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.264 #42 NEW cov: 12096 ft: 14305 corp: 27/734b lim: 45 exec/s: 42 rss: 73Mb L: 21/45 MS: 1 ChangeBit- 00:06:19.264 [2024-07-23 10:27:07.630606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffefffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.630633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.630731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.630757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.630858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.630878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.630980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.630995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.264 #43 NEW cov: 12096 ft: 14314 corp: 28/777b lim: 45 exec/s: 43 rss: 73Mb L: 43/45 MS: 1 InsertRepeatedBytes- 00:06:19.264 [2024-07-23 10:27:07.700207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fbff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.700238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.264 [2024-07-23 10:27:07.700335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.264 [2024-07-23 10:27:07.700352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.264 #44 NEW cov: 12096 ft: 14340 corp: 29/798b lim: 45 exec/s: 44 rss: 73Mb L: 21/45 MS: 1 ChangeBit- 00:06:19.265 [2024-07-23 10:27:07.750367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f7ffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.265 [2024-07-23 10:27:07.750395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.265 [2024-07-23 10:27:07.750490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.265 [2024-07-23 10:27:07.750504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.524 #45 NEW cov: 12096 ft: 14414 corp: 30/819b lim: 45 exec/s: 45 rss: 73Mb L: 21/45 MS: 1 ChangeBit- 00:06:19.524 [2024-07-23 10:27:07.800833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ebebffff cdw11:ebeb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.524 [2024-07-23 10:27:07.800860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.524 [2024-07-23 10:27:07.800954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ebffebeb cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:19.524 [2024-07-23 10:27:07.800970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.524 #46 NEW cov: 12096 ft: 14430 corp: 31/840b lim: 45 exec/s: 23 rss: 73Mb L: 21/45 MS: 1 ChangeByte- 00:06:19.524 #46 DONE cov: 12096 ft: 14430 corp: 31/840b lim: 45 exec/s: 23 rss: 73Mb 00:06:19.524 ###### Recommended dictionary. ###### 00:06:19.524 "?\000\000\000\000\000\000\000" # Uses: 0 00:06:19.524 ###### End of recommended dictionary. ###### 00:06:19.524 Done 46 runs in 2 second(s) 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:19.524 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:19.525 10:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:19.525 [2024-07-23 10:27:07.998093] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
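The run.sh@32/@41/@42 lines in the trace above also show the LeakSanitizer plumbing: two shutdown paths known to hold allocations are written into a suppression file that LSAN_OPTIONS points at, so expected leaks do not fail the run. A sketch of that step and the launch line, under stated assumptions: the trace shows two separate echo commands but not their redirections (overwrite then append is assumed), and $rootdir/$output_dir stand in for the absolute workspace paths.

    # Leak suppressions and launch, reconstructed from the trace above.
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    echo "leak:spdk_nvmf_qpair_disconnect" >  "$suppress_file"   # assumed overwrite
    echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"   # assumed append
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m 0x1 -s 512 -P "$output_dir/llvm/" \
        -F "$trid" -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"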
00:06:19.525 [2024-07-23 10:27:07.998180] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427941 ] 00:06:19.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.042 [2024-07-23 10:27:08.301562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.042 [2024-07-23 10:27:08.335097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.042 [2024-07-23 10:27:08.387792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.042 [2024-07-23 10:27:08.404126] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:20.042 INFO: Running with entropic power schedule (0xFF, 100). 00:06:20.042 INFO: Seed: 2006519240 00:06:20.042 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:20.042 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:20.042 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:20.042 INFO: A corpus is not provided, starting from an empty corpus 00:06:20.042 #2 INITED exec/s: 0 rss: 64Mb 00:06:20.042 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:20.042 This may also happen if the target rejected all inputs we tried so far 00:06:20.042 [2024-07-23 10:27:08.475167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:20.042 [2024-07-23 10:27:08.475219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.300 NEW_FUNC[1/690]: 0x49e4c0 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:20.300 NEW_FUNC[2/690]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:20.300 #3 NEW cov: 11769 ft: 11769 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:20.559 [2024-07-23 10:27:08.815486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:20.559 [2024-07-23 10:27:08.815534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.559 #9 NEW cov: 11899 ft: 12147 corp: 3/6b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:06:20.559 [2024-07-23 10:27:08.876130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f6ff cdw11:00000000 00:06:20.559 [2024-07-23 10:27:08.876163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.559 [2024-07-23 10:27:08.876247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:20.559 [2024-07-23 10:27:08.876263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.559 [2024-07-23 10:27:08.876345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a0a 
cdw11:00000000 00:06:20.559 [2024-07-23 10:27:08.876360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.559 #10 NEW cov: 11905 ft: 12749 corp: 4/13b lim: 10 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 CMP- DE: "\366\377\377\377"- 00:06:20.559 [2024-07-23 10:27:08.936078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:20.559 [2024-07-23 10:27:08.936107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.559 [2024-07-23 10:27:08.936190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:20.559 [2024-07-23 10:27:08.936206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.559 #11 NEW cov: 11990 ft: 13254 corp: 5/18b lim: 10 exec/s: 0 rss: 70Mb L: 5/7 MS: 1 EraseBytes- 00:06:20.559 [2024-07-23 10:27:08.996017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:20.559 [2024-07-23 10:27:08.996044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.559 #12 NEW cov: 11990 ft: 13311 corp: 6/21b lim: 10 exec/s: 0 rss: 70Mb L: 3/7 MS: 1 ChangeBit- 00:06:20.559 [2024-07-23 10:27:09.046209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:20.559 [2024-07-23 10:27:09.046234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.098 #13 NEW cov: 11990 ft: 13391 corp: 7/24b lim: 10 exec/s: 0 rss: 71Mb L: 3/7 MS: 1 ShuffleBytes- 00:06:21.098 [2024-07-23 10:27:09.106619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a6c cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.106644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.098 [2024-07-23 10:27:09.106726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00006c6c cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.106740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.098 #14 NEW cov: 11990 ft: 13582 corp: 8/28b lim: 10 exec/s: 0 rss: 71Mb L: 4/7 MS: 1 InsertRepeatedBytes- 00:06:21.098 [2024-07-23 10:27:09.156728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.156754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.098 [2024-07-23 10:27:09.156838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.156853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.098 #15 NEW cov: 11990 ft: 13635 corp: 9/33b lim: 10 exec/s: 0 rss: 71Mb L: 5/7 MS: 1 ShuffleBytes- 00:06:21.098 [2024-07-23 
10:27:09.217209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000807 cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.217234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.098 [2024-07-23 10:27:09.217319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000707 cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.217335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.098 [2024-07-23 10:27:09.217416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000070a cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.217431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.098 #16 NEW cov: 11990 ft: 13686 corp: 10/40b lim: 10 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:21.098 [2024-07-23 10:27:09.266863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.266889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.098 #17 NEW cov: 11990 ft: 13746 corp: 11/43b lim: 10 exec/s: 0 rss: 71Mb L: 3/7 MS: 1 InsertByte- 00:06:21.098 [2024-07-23 10:27:09.317593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.317619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.098 [2024-07-23 10:27:09.317703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.317719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.098 [2024-07-23 10:27:09.317800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006c6c cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.317826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.098 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:21.098 #18 NEW cov: 12013 ft: 13798 corp: 12/50b lim: 10 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:21.098 [2024-07-23 10:27:09.377450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.377475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.098 [2024-07-23 10:27:09.377553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:21.098 [2024-07-23 10:27:09.377567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.098 #19 NEW cov: 12013 ft: 13831 corp: 13/55b lim: 10 exec/s: 0 rss: 72Mb L: 5/7 MS: 1 CopyPart- 
00:06:21.098 [2024-07-23 10:27:09.428115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.428141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.099 [2024-07-23 10:27:09.428227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.428243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.099 [2024-07-23 10:27:09.428322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.428337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.099 [2024-07-23 10:27:09.428416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006c6c cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.428434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.099 #20 NEW cov: 12013 ft: 14054 corp: 14/64b lim: 10 exec/s: 20 rss: 72Mb L: 9/9 MS: 1 CrossOver- 00:06:21.099 [2024-07-23 10:27:09.488105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f6ff cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.488132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.099 [2024-07-23 10:27:09.488230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001dff cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.488246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.099 [2024-07-23 10:27:09.488328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.488343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.099 #21 NEW cov: 12013 ft: 14080 corp: 15/71b lim: 10 exec/s: 21 rss: 72Mb L: 7/9 MS: 1 ChangeByte- 00:06:21.099 [2024-07-23 10:27:09.537858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003b0a cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.537884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.099 #22 NEW cov: 12013 ft: 14214 corp: 16/74b lim: 10 exec/s: 22 rss: 72Mb L: 3/9 MS: 1 InsertByte- 00:06:21.099 [2024-07-23 10:27:09.587875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:21.099 [2024-07-23 10:27:09.587900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.357 #23 NEW cov: 12013 ft: 14223 corp: 17/76b lim: 10 exec/s: 23 rss: 72Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:21.357 [2024-07-23 10:27:09.638653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:4 nsid:0 cdw10:00000af6 cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.638679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.357 [2024-07-23 10:27:09.638763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.638781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.357 [2024-07-23 10:27:09.638879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff3b cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.638894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.357 #24 NEW cov: 12013 ft: 14236 corp: 18/83b lim: 10 exec/s: 24 rss: 72Mb L: 7/9 MS: 1 PersAutoDict- DE: "\366\377\377\377"- 00:06:21.357 [2024-07-23 10:27:09.698858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001800 cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.698886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.357 [2024-07-23 10:27:09.698964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.698979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.357 [2024-07-23 10:27:09.699054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006c6c cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.699069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.357 #25 NEW cov: 12013 ft: 14333 corp: 19/90b lim: 10 exec/s: 25 rss: 72Mb L: 7/9 MS: 1 ChangeByte- 00:06:21.357 [2024-07-23 10:27:09.748550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.748577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.357 #26 NEW cov: 12013 ft: 14348 corp: 20/92b lim: 10 exec/s: 26 rss: 72Mb L: 2/9 MS: 1 CrossOver- 00:06:21.357 [2024-07-23 10:27:09.809441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.809468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.357 [2024-07-23 10:27:09.809552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.809568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.357 [2024-07-23 10:27:09.809650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.809665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:21.357 [2024-07-23 10:27:09.809747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:21.357 [2024-07-23 10:27:09.809763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.357 #27 NEW cov: 12013 ft: 14385 corp: 21/101b lim: 10 exec/s: 27 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:21.616 [2024-07-23 10:27:09.869209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000827 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:09.869236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.616 [2024-07-23 10:27:09.869324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:09.869338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.616 #28 NEW cov: 12013 ft: 14400 corp: 22/105b lim: 10 exec/s: 28 rss: 72Mb L: 4/9 MS: 1 InsertByte- 00:06:21.616 [2024-07-23 10:27:09.929090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000827 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:09.929117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.616 #29 NEW cov: 12013 ft: 14423 corp: 23/108b lim: 10 exec/s: 29 rss: 72Mb L: 3/9 MS: 1 EraseBytes- 00:06:21.616 [2024-07-23 10:27:09.989795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:09.989822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.616 [2024-07-23 10:27:09.989911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000270a cdw11:00000000 00:06:21.616 [2024-07-23 10:27:09.989926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.616 [2024-07-23 10:27:09.990014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000270a cdw11:00000000 00:06:21.616 [2024-07-23 10:27:09.990030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.616 #30 NEW cov: 12013 ft: 14438 corp: 24/115b lim: 10 exec/s: 30 rss: 72Mb L: 7/9 MS: 1 CopyPart- 00:06:21.616 [2024-07-23 10:27:10.039695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000827 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:10.039728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.616 [2024-07-23 10:27:10.039819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001a40 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:10.039835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.616 #31 NEW cov: 12013 ft: 14445 corp: 25/119b lim: 10 exec/s: 31 rss: 72Mb L: 4/9 
MS: 1 ChangeBit- 00:06:21.616 [2024-07-23 10:27:10.090447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001800 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:10.090479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.616 [2024-07-23 10:27:10.090560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.616 [2024-07-23 10:27:10.090574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.616 [2024-07-23 10:27:10.090655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006c6c cdw11:00000000 00:06:21.616 [2024-07-23 10:27:10.090669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.616 [2024-07-23 10:27:10.090757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000246c cdw11:00000000 00:06:21.616 [2024-07-23 10:27:10.090772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.875 #32 NEW cov: 12013 ft: 14453 corp: 26/127b lim: 10 exec/s: 32 rss: 72Mb L: 8/9 MS: 1 InsertByte- 00:06:21.875 [2024-07-23 10:27:10.149802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:21.875 [2024-07-23 10:27:10.149828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.876 #33 NEW cov: 12013 ft: 14464 corp: 27/130b lim: 10 exec/s: 33 rss: 72Mb L: 3/9 MS: 1 EraseBytes- 00:06:21.876 [2024-07-23 10:27:10.199950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000400a cdw11:00000000 00:06:21.876 [2024-07-23 10:27:10.199976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.876 #34 NEW cov: 12013 ft: 14470 corp: 28/133b lim: 10 exec/s: 34 rss: 72Mb L: 3/9 MS: 1 CopyPart- 00:06:21.876 [2024-07-23 10:27:10.250464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001800 cdw11:00000000 00:06:21.876 [2024-07-23 10:27:10.250490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.876 [2024-07-23 10:27:10.250572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.876 [2024-07-23 10:27:10.250587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.876 #35 NEW cov: 12013 ft: 14487 corp: 29/137b lim: 10 exec/s: 35 rss: 73Mb L: 4/9 MS: 1 EraseBytes- 00:06:21.876 [2024-07-23 10:27:10.310654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000807 cdw11:00000000 00:06:21.876 [2024-07-23 10:27:10.310680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.876 [2024-07-23 10:27:10.310775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000707 cdw11:00000000 00:06:21.876 [2024-07-23 10:27:10.310795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.876 #36 NEW cov: 12013 ft: 14492 corp: 30/141b lim: 10 exec/s: 36 rss: 73Mb L: 4/9 MS: 1 EraseBytes- 00:06:21.876 [2024-07-23 10:27:10.370581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:21.876 [2024-07-23 10:27:10.370608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.136 #37 NEW cov: 12013 ft: 14517 corp: 31/143b lim: 10 exec/s: 37 rss: 73Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:22.136 [2024-07-23 10:27:10.421368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:22.136 [2024-07-23 10:27:10.421393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.136 [2024-07-23 10:27:10.421498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a6c cdw11:00000000 00:06:22.136 [2024-07-23 10:27:10.421514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.136 [2024-07-23 10:27:10.421601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000406c cdw11:00000000 00:06:22.136 [2024-07-23 10:27:10.421616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.136 #38 NEW cov: 12013 ft: 14522 corp: 32/149b lim: 10 exec/s: 38 rss: 73Mb L: 6/9 MS: 1 CrossOver- 00:06:22.136 [2024-07-23 10:27:10.471439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:22.136 [2024-07-23 10:27:10.471464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.136 [2024-07-23 10:27:10.471547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:22.136 [2024-07-23 10:27:10.471562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.136 [2024-07-23 10:27:10.471654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:22.136 [2024-07-23 10:27:10.471669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.136 #39 NEW cov: 12013 ft: 14529 corp: 33/155b lim: 10 exec/s: 19 rss: 73Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:06:22.136 #39 DONE cov: 12013 ft: 14529 corp: 33/155b lim: 10 exec/s: 19 rss: 73Mb 00:06:22.136 ###### Recommended dictionary. ###### 00:06:22.136 "\366\377\377\377" # Uses: 1 00:06:22.136 "\000\000\000\000" # Uses: 0 00:06:22.136 ###### End of recommended dictionary. 
###### 00:06:22.136 Done 39 runs in 2 second(s) 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:22.136 10:27:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:22.399 [2024-07-23 10:27:10.655435] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:22.399 [2024-07-23 10:27:10.655506] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428313 ] 00:06:22.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.659 [2024-07-23 10:27:10.961341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.659 [2024-07-23 10:27:10.991633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.659 [2024-07-23 10:27:11.044350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.659 [2024-07-23 10:27:11.060692] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:22.659 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:22.659 INFO: Seed: 367545069 00:06:22.659 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:22.659 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:22.659 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:22.659 INFO: A corpus is not provided, starting from an empty corpus 00:06:22.659 #2 INITED exec/s: 0 rss: 64Mb 00:06:22.659 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:22.659 This may also happen if the target rejected all inputs we tried so far 00:06:22.659 [2024-07-23 10:27:11.109325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:22.659 [2024-07-23 10:27:11.109357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.918 NEW_FUNC[1/690]: 0x49eeb0 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:22.918 NEW_FUNC[2/690]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:22.918 #3 NEW cov: 11769 ft: 11770 corp: 2/4b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CMP- DE: "\002\000"- 00:06:23.178 [2024-07-23 10:27:11.430326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.430373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.430434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.430453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.430511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.430529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.178 #4 NEW cov: 11899 ft: 12646 corp: 3/10b lim: 10 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:23.178 [2024-07-23 10:27:11.480566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.480594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.480646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.480661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.480711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.480726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.178 
[2024-07-23 10:27:11.480782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.480796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.480845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.480859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.178 #5 NEW cov: 11905 ft: 13061 corp: 4/20b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:23.178 [2024-07-23 10:27:11.530333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.530361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.530413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.530427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.178 #6 NEW cov: 11990 ft: 13475 corp: 5/25b lim: 10 exec/s: 0 rss: 70Mb L: 5/10 MS: 1 CrossOver- 00:06:23.178 [2024-07-23 10:27:11.570810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.570835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.570887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006ccd cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.570901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.570952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.570965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.571015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.571028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.571081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.571094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.178 #7 NEW cov: 11990 ft: 13671 corp: 6/35b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:06:23.178 [2024-07-23 10:27:11.620919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.620947] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.621001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.621014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.621064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.621077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.621127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.621140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.621189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.621202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.178 #8 NEW cov: 11990 ft: 13727 corp: 7/45b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:23.178 [2024-07-23 10:27:11.670741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.670767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.178 [2024-07-23 10:27:11.670824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.178 [2024-07-23 10:27:11.670838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.438 #9 NEW cov: 11990 ft: 13785 corp: 8/50b lim: 10 exec/s: 0 rss: 71Mb L: 5/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:23.438 [2024-07-23 10:27:11.711010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.711036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.711088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.711101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.711168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.711181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.438 #10 NEW cov: 11990 ft: 13839 corp: 9/57b lim: 10 exec/s: 0 rss: 71Mb L: 7/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:23.438 [2024-07-23 10:27:11.760911] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.760937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.438 #11 NEW cov: 11990 ft: 13877 corp: 10/60b lim: 10 exec/s: 0 rss: 71Mb L: 3/10 MS: 1 CrossOver- 00:06:23.438 [2024-07-23 10:27:11.801468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.801492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.801546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.801560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.801612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.801626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.801676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fffb cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.801689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.801740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.801753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.438 #12 NEW cov: 11990 ft: 13935 corp: 11/70b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:23.438 [2024-07-23 10:27:11.851592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.851617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.851670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.851684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.851736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.851749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.851806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000200 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.851819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.851869] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.851882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.438 #13 NEW cov: 11990 ft: 13947 corp: 12/80b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:23.438 [2024-07-23 10:27:11.901630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.901655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.901705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.901718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.901769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.901788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.438 [2024-07-23 10:27:11.901837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000200 cdw11:00000000 00:06:23.438 [2024-07-23 10:27:11.901850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.438 #14 NEW cov: 11990 ft: 13960 corp: 13/89b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 EraseBytes- 00:06:23.698 [2024-07-23 10:27:11.951552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:23.698 [2024-07-23 10:27:11.951577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.698 [2024-07-23 10:27:11.951630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.698 [2024-07-23 10:27:11.951644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.698 #15 NEW cov: 11990 ft: 13975 corp: 14/94b lim: 10 exec/s: 0 rss: 72Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:23.698 [2024-07-23 10:27:12.001895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.001921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.698 [2024-07-23 10:27:12.001974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.001987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.698 [2024-07-23 10:27:12.002052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.002071] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.698 [2024-07-23 10:27:12.002123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.002137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.698 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:23.698 #16 NEW cov: 12013 ft: 14003 corp: 15/103b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:23.698 [2024-07-23 10:27:12.042111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.042137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.698 [2024-07-23 10:27:12.042186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.042200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.698 [2024-07-23 10:27:12.042249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.042262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.698 [2024-07-23 10:27:12.042311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000200 cdw11:00000000 00:06:23.698 [2024-07-23 10:27:12.042324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.042376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.042389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.699 #17 NEW cov: 12013 ft: 14042 corp: 16/113b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:23.699 [2024-07-23 10:27:12.082227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.082256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.082309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006ccd cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.082322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.082375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.082389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.082438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.082452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.082501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000200a cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.082515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.699 #18 NEW cov: 12013 ft: 14062 corp: 17/123b lim: 10 exec/s: 18 rss: 72Mb L: 10/10 MS: 1 ChangeBit- 00:06:23.699 [2024-07-23 10:27:12.122362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.122387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.122439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.122453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.122503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.122516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.122566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.122579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.122629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.122642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.699 #19 NEW cov: 12013 ft: 14146 corp: 18/133b lim: 10 exec/s: 19 rss: 72Mb L: 10/10 MS: 1 ChangeBit- 00:06:23.699 [2024-07-23 10:27:12.162328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.162353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.162410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fff7 cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.162423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.162476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.162489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.699 [2024-07-23 10:27:12.162544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:23.699 [2024-07-23 10:27:12.162558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.699 #20 NEW cov: 12013 ft: 14161 corp: 19/142b lim: 10 exec/s: 20 rss: 72Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:23.959 [2024-07-23 10:27:12.202345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002ff cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.202371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.202424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.202438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.202491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.202505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.959 #21 NEW cov: 12013 ft: 14194 corp: 20/149b lim: 10 exec/s: 21 rss: 72Mb L: 7/10 MS: 1 InsertByte- 00:06:23.959 [2024-07-23 10:27:12.242673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.242698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.242747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.242761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.242813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.242826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.242876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.242889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.242938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.242951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.959 #22 NEW cov: 12013 ft: 14219 corp: 21/159b lim: 10 exec/s: 22 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:23.959 [2024-07-23 10:27:12.292725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e702 cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.292750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.292805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.292819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.292883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.292897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.959 [2024-07-23 10:27:12.292951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.959 [2024-07-23 10:27:12.292967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.960 #23 NEW cov: 12013 ft: 14233 corp: 22/167b lim: 10 exec/s: 23 rss: 72Mb L: 8/10 MS: 1 InsertByte- 00:06:23.960 [2024-07-23 10:27:12.342992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.343017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.960 [2024-07-23 10:27:12.343070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.343083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.960 [2024-07-23 10:27:12.343134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.343147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.960 [2024-07-23 10:27:12.343197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.343210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.960 [2024-07-23 10:27:12.343262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.343275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.960 #24 NEW cov: 12013 ft: 14247 corp: 23/177b lim: 10 exec/s: 24 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:06:23.960 [2024-07-23 10:27:12.392868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.392893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.960 [2024-07-23 10:27:12.392945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006cff cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.392958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:23.960 [2024-07-23 10:27:12.393010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.393024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.960 #25 NEW cov: 12013 ft: 14294 corp: 24/183b lim: 10 exec/s: 25 rss: 72Mb L: 6/10 MS: 1 EraseBytes- 00:06:23.960 [2024-07-23 10:27:12.432905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.432932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.960 [2024-07-23 10:27:12.432984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:23.960 [2024-07-23 10:27:12.432998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.960 #26 NEW cov: 12013 ft: 14306 corp: 25/188b lim: 10 exec/s: 26 rss: 72Mb L: 5/10 MS: 1 CrossOver- 00:06:24.220 [2024-07-23 10:27:12.473397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd0a cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.473424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.473475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.473493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.473546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.473561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.473611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.473625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.473675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.473688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.220 #27 NEW cov: 12013 ft: 14327 corp: 26/198b lim: 10 exec/s: 27 rss: 72Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:24.220 [2024-07-23 10:27:12.513398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.513425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.513493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:24.220 [2024-07-23 
10:27:12.513509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.513561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.513574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.513624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.513638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.220 #28 NEW cov: 12013 ft: 14339 corp: 27/207b lim: 10 exec/s: 28 rss: 72Mb L: 9/10 MS: 1 CrossOver- 00:06:24.220 [2024-07-23 10:27:12.553488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.553515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.553571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.553586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.220 [2024-07-23 10:27:12.553638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:24.220 [2024-07-23 10:27:12.553653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.221 [2024-07-23 10:27:12.553706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.553721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.221 #29 NEW cov: 12013 ft: 14351 corp: 28/216b lim: 10 exec/s: 29 rss: 72Mb L: 9/10 MS: 1 EraseBytes- 00:06:24.221 [2024-07-23 10:27:12.593342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.593371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.221 [2024-07-23 10:27:12.593438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.593453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.221 #30 NEW cov: 12013 ft: 14357 corp: 29/221b lim: 10 exec/s: 30 rss: 72Mb L: 5/10 MS: 1 EraseBytes- 00:06:24.221 [2024-07-23 10:27:12.633497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.633523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.221 [2024-07-23 10:27:12.633574] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.633588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.221 #31 NEW cov: 12013 ft: 14370 corp: 30/226b lim: 10 exec/s: 31 rss: 72Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:24.221 [2024-07-23 10:27:12.683725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.683750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.221 [2024-07-23 10:27:12.683806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.683820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.221 [2024-07-23 10:27:12.683872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:24.221 [2024-07-23 10:27:12.683885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.221 #32 NEW cov: 12013 ft: 14388 corp: 31/232b lim: 10 exec/s: 32 rss: 73Mb L: 6/10 MS: 1 EraseBytes- 00:06:24.481 [2024-07-23 10:27:12.724127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.724153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.724203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.724217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.724265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.724279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.724328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.724342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.724390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.724404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.481 #33 NEW cov: 12013 ft: 14392 corp: 32/242b lim: 10 exec/s: 33 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:24.481 [2024-07-23 10:27:12.764161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cdcd cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.764189] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.764239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006c02 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.764253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.764302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000cdff cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.764316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.764365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.764378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.764426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.764439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.481 #34 NEW cov: 12013 ft: 14411 corp: 33/252b lim: 10 exec/s: 34 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:24.481 [2024-07-23 10:27:12.803790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.803815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.481 #36 NEW cov: 12013 ft: 14488 corp: 34/255b lim: 10 exec/s: 36 rss: 73Mb L: 3/10 MS: 2 ShuffleBytes-PersAutoDict- DE: "\002\000"- 00:06:24.481 [2024-07-23 10:27:12.844152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.844177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.844226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000cd00 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.844240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.844288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.844301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.481 #37 NEW cov: 12013 ft: 14533 corp: 35/262b lim: 10 exec/s: 37 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:06:24.481 [2024-07-23 10:27:12.894407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cd02 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.894432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.894480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:5 nsid:0 cdw10:000000cd cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.894493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.894541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.894555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.894602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.894618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.481 #38 NEW cov: 12013 ft: 14536 corp: 36/271b lim: 10 exec/s: 38 rss: 73Mb L: 9/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:24.481 [2024-07-23 10:27:12.944687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d502 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.944711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.944760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.944774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.944827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.944841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.944891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.944903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.481 [2024-07-23 10:27:12.944953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:24.481 [2024-07-23 10:27:12.944966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.481 #39 NEW cov: 12013 ft: 14557 corp: 37/281b lim: 10 exec/s: 39 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:06:24.742 [2024-07-23 10:27:12.994768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:24.742 [2024-07-23 10:27:12.994797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.742 [2024-07-23 10:27:12.994863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 00:06:24.742 [2024-07-23 10:27:12.994878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.742 [2024-07-23 10:27:12.994941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.743 [2024-07-23 10:27:12.994955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.743 [2024-07-23 10:27:12.995006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001f0a cdw11:00000000 00:06:24.743 [2024-07-23 10:27:12.995020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.743 #40 NEW cov: 12013 ft: 14566 corp: 38/289b lim: 10 exec/s: 40 rss: 73Mb L: 8/10 MS: 1 InsertByte- 00:06:24.743 [2024-07-23 10:27:13.034853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000002ff cdw11:00000000 00:06:24.743 [2024-07-23 10:27:13.034880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.743 [2024-07-23 10:27:13.034929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008000 cdw11:00000000 00:06:24.743 [2024-07-23 10:27:13.034943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.743 [2024-07-23 10:27:13.034992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:24.743 [2024-07-23 10:27:13.035005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.743 [2024-07-23 10:27:13.035055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:24.743 [2024-07-23 10:27:13.035068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.743 #41 NEW cov: 12013 ft: 14606 corp: 39/298b lim: 10 exec/s: 41 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:06:24.743 [2024-07-23 10:27:13.074813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:24.743 [2024-07-23 10:27:13.074839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.743 [2024-07-23 10:27:13.074891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 00:06:24.743 [2024-07-23 10:27:13.074905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.743 [2024-07-23 10:27:13.074971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000000 00:06:24.743 [2024-07-23 10:27:13.074994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.743 #42 NEW cov: 12013 ft: 14610 corp: 40/305b lim: 10 exec/s: 21 rss: 73Mb L: 7/10 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:24.743 #42 DONE cov: 12013 ft: 14610 corp: 40/305b lim: 10 exec/s: 21 rss: 73Mb 00:06:24.743 ###### Recommended dictionary. ###### 00:06:24.743 "\002\000" # Uses: 9 00:06:24.743 ###### End of recommended dictionary. 
###### 00:06:24.743 Done 42 runs in 2 second(s) 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:24.743 10:27:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:25.006 [2024-07-23 10:27:13.267299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:25.006 [2024-07-23 10:27:13.267378] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428681 ] 00:06:25.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.267 [2024-07-23 10:27:13.572543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.267 [2024-07-23 10:27:13.604806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.267 [2024-07-23 10:27:13.657459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.267 [2024-07-23 10:27:13.673765] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:25.267 INFO: Running with entropic power schedule (0xFF, 100). 
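[The run.sh trace above derives each fuzzer instance's TCP listener port from its number: printf %02d zero-pads the fuzzer type, and sed rewrites the default trsvcid 4420 in fuzz_json.conf to the new value before the harness starts. A minimal bash sketch of that scheme, paths abbreviated; the fixed "44" prefix is inferred from the 4408/4409 ports seen in this log rather than read from run.sh:

    # sketch only: per-fuzzer trsvcid derivation as traced above
    fuzzer_type=8
    suffix=$(printf %02d "${fuzzer_type}")   # 8 -> "08"
    port="44${suffix}"                       # -> 4408; fuzzer 9 below gets 4409
    # run.sh then patches the per-instance JSON config:
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" fuzz_json.conf
]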
00:06:25.267 INFO: Seed: 2982535679 00:06:25.267 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:25.267 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:25.267 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:25.267 INFO: A corpus is not provided, starting from an empty corpus 00:06:25.267 [2024-07-23 10:27:13.718433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.267 [2024-07-23 10:27:13.718472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.267 #2 INITED cov: 11797 ft: 11798 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:25.526 [2024-07-23 10:27:13.768423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.526 [2024-07-23 10:27:13.768457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.526 #3 NEW cov: 11927 ft: 12427 corp: 2/2b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ChangeBit- 00:06:25.526 [2024-07-23 10:27:13.848611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.526 [2024-07-23 10:27:13.848643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.526 #4 NEW cov: 11933 ft: 12636 corp: 3/3b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:25.526 [2024-07-23 10:27:13.898806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.526 [2024-07-23 10:27:13.898838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.526 [2024-07-23 10:27:13.898887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.526 [2024-07-23 10:27:13.898904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.526 #5 NEW cov: 12018 ft: 13642 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:25.526 [2024-07-23 10:27:13.978957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.526 [2024-07-23 10:27:13.978989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.526 [2024-07-23 10:27:13.979038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.526 [2024-07-23 10:27:13.979054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.526 #6 NEW cov: 12018 ft: 13692 corp: 5/7b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:25.786 
[2024-07-23 10:27:14.029237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.029284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.786 [2024-07-23 10:27:14.029319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.029336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.786 [2024-07-23 10:27:14.029367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.029383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.786 #7 NEW cov: 12018 ft: 13928 corp: 6/10b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CopyPart- 00:06:25.786 [2024-07-23 10:27:14.109415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.109447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.786 [2024-07-23 10:27:14.109496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.109512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.786 [2024-07-23 10:27:14.109542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.109558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.786 [2024-07-23 10:27:14.109588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.109603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.786 #8 NEW cov: 12018 ft: 14243 corp: 7/14b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:06:25.786 [2024-07-23 10:27:14.189541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.189573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.786 [2024-07-23 10:27:14.189622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.189638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.786 #9 NEW cov: 12018 ft: 14312 corp: 8/16b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 InsertByte- 00:06:25.786 [2024-07-23 10:27:14.269714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.269745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.786 [2024-07-23 10:27:14.269799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.786 [2024-07-23 10:27:14.269816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.046 #10 NEW cov: 12018 ft: 14340 corp: 9/18b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 InsertByte- 00:06:26.046 [2024-07-23 10:27:14.319826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.046 [2024-07-23 10:27:14.319858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.046 #11 NEW cov: 12018 ft: 14358 corp: 10/19b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 CopyPart- 00:06:26.046 [2024-07-23 10:27:14.369982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.046 [2024-07-23 10:27:14.370013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.046 [2024-07-23 10:27:14.370061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.046 [2024-07-23 10:27:14.370077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.046 #12 NEW cov: 12018 ft: 14386 corp: 11/21b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:26.046 [2024-07-23 10:27:14.430129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.047 [2024-07-23 10:27:14.430159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.047 [2024-07-23 10:27:14.430207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.047 [2024-07-23 10:27:14.430224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.047 #13 NEW cov: 12018 ft: 14411 corp: 12/23b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 EraseBytes- 00:06:26.047 [2024-07-23 10:27:14.510334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.047 [2024-07-23 10:27:14.510363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.047 [2024-07-23 10:27:14.510412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.047 [2024-07-23 10:27:14.510428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.370 #14 NEW cov: 12018 ft: 14440 corp: 13/25b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:06:26.370 [2024-07-23 10:27:14.590726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.370 [2024-07-23 10:27:14.590760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.370 [2024-07-23 10:27:14.590829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.370 [2024-07-23 10:27:14.590862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.370 [2024-07-23 10:27:14.590892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.370 [2024-07-23 10:27:14.590918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.370 [2024-07-23 10:27:14.590948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.370 [2024-07-23 10:27:14.590968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.370 [2024-07-23 10:27:14.590998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.370 [2024-07-23 10:27:14.591013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.658 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:26.658 #15 NEW cov: 12041 ft: 14596 corp: 14/30b lim: 5 exec/s: 15 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:06:26.658 [2024-07-23 10:27:14.953891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:14.953946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.658 #16 NEW cov: 12041 ft: 14691 corp: 15/31b lim: 5 exec/s: 16 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:06:26.658 [2024-07-23 10:27:15.023805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.023833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.658 #17 NEW cov: 
12041 ft: 14726 corp: 16/32b lim: 5 exec/s: 17 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:26.658 [2024-07-23 10:27:15.075276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.075303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.658 [2024-07-23 10:27:15.075399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.075414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.658 [2024-07-23 10:27:15.075505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.075520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.658 [2024-07-23 10:27:15.075610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.075624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.658 [2024-07-23 10:27:15.075718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.075734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.658 #18 NEW cov: 12041 ft: 14806 corp: 17/37b lim: 5 exec/s: 18 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:26.658 [2024-07-23 10:27:15.144795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.144822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.658 [2024-07-23 10:27:15.144911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.144930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.658 [2024-07-23 10:27:15.145023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.658 [2024-07-23 10:27:15.145036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.923 #19 NEW cov: 12041 ft: 14862 corp: 18/40b lim: 5 exec/s: 19 rss: 72Mb L: 3/5 MS: 1 ChangeByte- 00:06:26.923 [2024-07-23 10:27:15.204289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:26.923 [2024-07-23 10:27:15.204316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.923 #20 NEW cov: 12041 ft: 14864 corp: 19/41b lim: 5 exec/s: 20 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:26.923 [2024-07-23 10:27:15.264632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.923 [2024-07-23 10:27:15.264660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.923 #21 NEW cov: 12041 ft: 14897 corp: 20/42b lim: 5 exec/s: 21 rss: 72Mb L: 1/5 MS: 1 CopyPart- 00:06:26.923 [2024-07-23 10:27:15.325132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.923 [2024-07-23 10:27:15.325159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.923 [2024-07-23 10:27:15.325251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.923 [2024-07-23 10:27:15.325268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.923 #22 NEW cov: 12041 ft: 14907 corp: 21/44b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:26.923 [2024-07-23 10:27:15.384967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.923 [2024-07-23 10:27:15.384995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.923 #23 NEW cov: 12041 ft: 14919 corp: 22/45b lim: 5 exec/s: 23 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:06:27.183 [2024-07-23 10:27:15.436585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.436612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 10:27:15.436705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.436721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 10:27:15.436812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.436829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 10:27:15.436920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.436937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 10:27:15.437031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.437047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.183 #24 NEW cov: 12041 ft: 14925 corp: 23/50b lim: 5 exec/s: 24 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:06:27.183 [2024-07-23 10:27:15.495950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.495975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 10:27:15.496066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.496081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.183 #25 NEW cov: 12041 ft: 14933 corp: 24/52b lim: 5 exec/s: 25 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:27.183 [2024-07-23 10:27:15.545815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.545839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 10:27:15.545935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.545951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.183 #26 NEW cov: 12041 ft: 15009 corp: 25/54b lim: 5 exec/s: 26 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:27.183 [2024-07-23 10:27:15.595951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.595976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 10:27:15.596066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.596081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.183 #27 NEW cov: 12041 ft: 15017 corp: 26/56b lim: 5 exec/s: 27 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:27.183 [2024-07-23 10:27:15.646147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.646173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.183 [2024-07-23 
10:27:15.646274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.183 [2024-07-23 10:27:15.646291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.183 #28 NEW cov: 12041 ft: 15082 corp: 27/58b lim: 5 exec/s: 28 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:06:27.443 [2024-07-23 10:27:15.706008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.443 [2024-07-23 10:27:15.706034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.443 #29 NEW cov: 12041 ft: 15089 corp: 28/59b lim: 5 exec/s: 14 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:06:27.443 #29 DONE cov: 12041 ft: 15089 corp: 28/59b lim: 5 exec/s: 14 rss: 73Mb 00:06:27.443 Done 29 runs in 2 second(s) 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:27.443 10:27:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:27.443 [2024-07-23 
10:27:15.890110] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:27.443 [2024-07-23 10:27:15.890179] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429056 ] 00:06:27.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.702 [2024-07-23 10:27:16.199100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.962 [2024-07-23 10:27:16.230023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.962 [2024-07-23 10:27:16.282688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.962 [2024-07-23 10:27:16.298997] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:27.962 INFO: Running with entropic power schedule (0xFF, 100). 00:06:27.962 INFO: Seed: 1310608030 00:06:27.962 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:27.962 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:27.962 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:27.962 INFO: A corpus is not provided, starting from an empty corpus 00:06:27.962 [2024-07-23 10:27:16.347202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.962 [2024-07-23 10:27:16.347232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.962 #2 INITED cov: 11788 ft: 11797 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:27.962 [2024-07-23 10:27:16.387214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.962 [2024-07-23 10:27:16.387242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.222 NEW_FUNC[1/1]: 0x12fed30 in nvmf_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/nvmf.c:150 00:06:28.222 #3 NEW cov: 11927 ft: 12530 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBinInt- 00:06:28.222 [2024-07-23 10:27:16.708216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.222 [2024-07-23 10:27:16.708267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.481 #4 NEW cov: 11933 ft: 12724 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBinInt- 00:06:28.481 [2024-07-23 10:27:16.758376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.758405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.481 [2024-07-23 10:27:16.758462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 
10:27:16.758476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.481 #5 NEW cov: 12018 ft: 13612 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:28.481 [2024-07-23 10:27:16.808303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.808329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.481 #6 NEW cov: 12018 ft: 13793 corp: 5/6b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBinInt- 00:06:28.481 [2024-07-23 10:27:16.848577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.848604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.481 [2024-07-23 10:27:16.848658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.848672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.481 #7 NEW cov: 12018 ft: 13860 corp: 6/8b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:06:28.481 [2024-07-23 10:27:16.888485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.888511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.481 #8 NEW cov: 12018 ft: 13923 corp: 7/9b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:28.481 [2024-07-23 10:27:16.939138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.939164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.481 [2024-07-23 10:27:16.939220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.939234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.481 [2024-07-23 10:27:16.939290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.939303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.481 [2024-07-23 10:27:16.939358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.481 [2024-07-23 10:27:16.939371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.481 #9 NEW cov: 12018 ft: 14274 corp: 8/13b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:06:28.741 [2024-07-23 10:27:16.988816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:16.988841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.741 #10 NEW cov: 12018 ft: 14432 corp: 9/14b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 ChangeBit- 00:06:28.741 [2024-07-23 10:27:17.029108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.029133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.741 [2024-07-23 10:27:17.029185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.029199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.741 #11 NEW cov: 12018 ft: 14484 corp: 10/16b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:28.741 [2024-07-23 10:27:17.069071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.069096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.741 #12 NEW cov: 12018 ft: 14520 corp: 11/17b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:28.741 [2024-07-23 10:27:17.119794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.119820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.741 [2024-07-23 10:27:17.119876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.119890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.741 [2024-07-23 10:27:17.119943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.119956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.741 [2024-07-23 10:27:17.120009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.120022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.741 [2024-07-23 10:27:17.120076] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.120092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.741 #13 NEW cov: 12018 ft: 14630 corp: 12/22b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:28.741 [2024-07-23 10:27:17.159309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.159335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.741 #14 NEW cov: 12018 ft: 14662 corp: 13/23b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:28.741 [2024-07-23 10:27:17.199399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.199424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.741 #15 NEW cov: 12018 ft: 14680 corp: 14/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:28.741 [2024-07-23 10:27:17.239685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.239710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.741 [2024-07-23 10:27:17.239764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.741 [2024-07-23 10:27:17.239783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.001 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:29.001 #16 NEW cov: 12041 ft: 14736 corp: 15/26b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:29.001 [2024-07-23 10:27:17.280097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.280122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.001 [2024-07-23 10:27:17.280193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.280207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.001 [2024-07-23 10:27:17.280261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.280274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.001 
[2024-07-23 10:27:17.280329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.280343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.001 #17 NEW cov: 12041 ft: 14745 corp: 16/30b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeByte- 00:06:29.001 [2024-07-23 10:27:17.329753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.329783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.001 #18 NEW cov: 12041 ft: 14793 corp: 17/31b lim: 5 exec/s: 18 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:29.001 [2024-07-23 10:27:17.380067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.380093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.001 [2024-07-23 10:27:17.380151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.380164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.001 #19 NEW cov: 12041 ft: 14821 corp: 18/33b lim: 5 exec/s: 19 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:06:29.001 [2024-07-23 10:27:17.430034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.430060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.001 #20 NEW cov: 12041 ft: 14834 corp: 19/34b lim: 5 exec/s: 20 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:29.001 [2024-07-23 10:27:17.480205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.001 [2024-07-23 10:27:17.480230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.001 #21 NEW cov: 12041 ft: 14893 corp: 20/35b lim: 5 exec/s: 21 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:29.260 [2024-07-23 10:27:17.520962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.260 [2024-07-23 10:27:17.520988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.521045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.521059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:29.261 [2024-07-23 10:27:17.521128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.521142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.521197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.521210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.521267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.521281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.261 #22 NEW cov: 12041 ft: 14909 corp: 21/40b lim: 5 exec/s: 22 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:06:29.261 [2024-07-23 10:27:17.570444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.570470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.261 #23 NEW cov: 12041 ft: 14922 corp: 22/41b lim: 5 exec/s: 23 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:29.261 [2024-07-23 10:27:17.621086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.621114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.621172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.621185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.621240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.621254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.621308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.621322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.261 #24 NEW cov: 12041 ft: 14934 corp: 23/45b lim: 5 exec/s: 24 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:29.261 [2024-07-23 10:27:17.660728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.660755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.261 #25 NEW cov: 12041 ft: 14956 corp: 24/46b lim: 5 exec/s: 25 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:29.261 [2024-07-23 10:27:17.701335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.701361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.701416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.701430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.701487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.701500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.701556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.701570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.261 #26 NEW cov: 12041 ft: 14965 corp: 25/50b lim: 5 exec/s: 26 rss: 72Mb L: 4/5 MS: 1 CrossOver- 00:06:29.261 [2024-07-23 10:27:17.741082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.741107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.261 [2024-07-23 10:27:17.741162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.261 [2024-07-23 10:27:17.741176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.521 #27 NEW cov: 12041 ft: 14973 corp: 26/52b lim: 5 exec/s: 27 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:29.521 [2024-07-23 10:27:17.791491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.791517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.791573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.791587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.521 
[2024-07-23 10:27:17.791655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.791669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.791725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.791739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.521 #28 NEW cov: 12041 ft: 15005 corp: 27/56b lim: 5 exec/s: 28 rss: 72Mb L: 4/5 MS: 1 CMP- DE: "\001\000"- 00:06:29.521 [2024-07-23 10:27:17.831475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.831500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.831555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.831568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.831623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.831636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.521 #29 NEW cov: 12041 ft: 15191 corp: 28/59b lim: 5 exec/s: 29 rss: 72Mb L: 3/5 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:29.521 [2024-07-23 10:27:17.871740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.871768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.871828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.871842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.871896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.871910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.871963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.871976] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.521 #30 NEW cov: 12041 ft: 15206 corp: 29/63b lim: 5 exec/s: 30 rss: 72Mb L: 4/5 MS: 1 EraseBytes- 00:06:29.521 [2024-07-23 10:27:17.921404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.921431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.521 #31 NEW cov: 12041 ft: 15228 corp: 30/64b lim: 5 exec/s: 31 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:29.521 [2024-07-23 10:27:17.962004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.962030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.962100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.962113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.962169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.962182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.521 [2024-07-23 10:27:17.962238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:17.962251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.521 #32 NEW cov: 12041 ft: 15237 corp: 31/68b lim: 5 exec/s: 32 rss: 72Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:29.521 [2024-07-23 10:27:18.011690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.521 [2024-07-23 10:27:18.011716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.782 #33 NEW cov: 12041 ft: 15268 corp: 32/69b lim: 5 exec/s: 33 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:29.782 [2024-07-23 10:27:18.051968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.051994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.782 [2024-07-23 10:27:18.052050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.052064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:29.782 #34 NEW cov: 12041 ft: 15273 corp: 33/71b lim: 5 exec/s: 34 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:29.782 [2024-07-23 10:27:18.101941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.101967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.782 #35 NEW cov: 12041 ft: 15285 corp: 34/72b lim: 5 exec/s: 35 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:06:29.782 [2024-07-23 10:27:18.142195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.142221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.782 [2024-07-23 10:27:18.142279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.142293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.782 #36 NEW cov: 12041 ft: 15312 corp: 35/74b lim: 5 exec/s: 36 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:29.782 [2024-07-23 10:27:18.192152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.192178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.782 #37 NEW cov: 12041 ft: 15315 corp: 36/75b lim: 5 exec/s: 37 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:29.782 [2024-07-23 10:27:18.242487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.242514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.782 [2024-07-23 10:27:18.242569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.782 [2024-07-23 10:27:18.242583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.782 #38 NEW cov: 12041 ft: 15342 corp: 37/77b lim: 5 exec/s: 38 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:30.043 [2024-07-23 10:27:18.292482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.043 [2024-07-23 10:27:18.292506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.043 #39 NEW cov: 12041 ft: 15355 corp: 38/78b lim: 5 exec/s: 39 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:30.043 [2024-07-23 10:27:18.342908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.043 [2024-07-23 
10:27:18.342933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.043 [2024-07-23 10:27:18.342990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.043 [2024-07-23 10:27:18.343003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.043 [2024-07-23 10:27:18.343075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.043 [2024-07-23 10:27:18.343089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.043 #40 NEW cov: 12041 ft: 15382 corp: 39/81b lim: 5 exec/s: 20 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:30.043 #40 DONE cov: 12041 ft: 15382 corp: 39/81b lim: 5 exec/s: 20 rss: 73Mb 00:06:30.043 ###### Recommended dictionary. ###### 00:06:30.043 "\001\000" # Uses: 1 00:06:30.043 ###### End of recommended dictionary. ###### 00:06:30.043 Done 40 runs in 2 second(s) 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:30.043 10:27:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:30.302 [2024-07-23 10:27:18.549271] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:30.302 [2024-07-23 10:27:18.549366] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429424 ] 00:06:30.302 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.563 [2024-07-23 10:27:18.861111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.563 [2024-07-23 10:27:18.894398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.563 [2024-07-23 10:27:18.947027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.563 [2024-07-23 10:27:18.963348] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:30.563 INFO: Running with entropic power schedule (0xFF, 100). 00:06:30.563 INFO: Seed: 3975565389 00:06:30.563 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:30.563 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:30.563 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:30.563 INFO: A corpus is not provided, starting from an empty corpus 00:06:30.563 #2 INITED exec/s: 0 rss: 64Mb 00:06:30.563 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:30.563 This may also happen if the target rejected all inputs we tried so far 00:06:30.563 [2024-07-23 10:27:19.012395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.563 [2024-07-23 10:27:19.012424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.563 [2024-07-23 10:27:19.012482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.563 [2024-07-23 10:27:19.012497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.563 [2024-07-23 10:27:19.012554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.563 [2024-07-23 10:27:19.012571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.132 NEW_FUNC[1/691]: 0x4a0820 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:31.132 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:31.132 #9 NEW cov: 11819 ft: 11816 corp: 2/25b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:31.132 [2024-07-23 10:27:19.353272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:30444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.353313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.353373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.353387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.353445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.353458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.132 #10 NEW cov: 11950 ft: 12368 corp: 3/49b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 CopyPart- 00:06:31.132 [2024-07-23 10:27:19.403100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.403128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.132 #16 NEW cov: 11956 ft: 12995 corp: 4/63b lim: 40 exec/s: 0 rss: 71Mb L: 14/24 MS: 1 InsertRepeatedBytes- 00:06:31.132 [2024-07-23 10:27:19.443171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.443198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.132 #22 NEW cov: 12041 ft: 13205 corp: 5/78b lim: 40 exec/s: 0 rss: 71Mb L: 15/24 MS: 1 InsertByte- 00:06:31.132 [2024-07-23 10:27:19.493828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.493856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.493918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.493932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.493992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.494006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.494066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.494080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.494140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:e2e2e2e2 cdw11:e200000b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.494157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.132 #25 NEW cov: 12041 ft: 13851 corp: 6/118b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 3 ChangeBit-InsertRepeatedBytes-InsertRepeatedBytes- 00:06:31.132 [2024-07-23 10:27:19.533691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a797979 cdw11:79797979 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.533717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.533773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:7979ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.533792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.533850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.533863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.132 #26 NEW cov: 12041 ft: 13908 corp: 7/146b lim: 40 exec/s: 0 rss: 72Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:06:31.132 [2024-07-23 10:27:19.583575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.583601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.132 #27 NEW cov: 12041 ft: 14046 corp: 8/160b lim: 40 exec/s: 0 rss: 72Mb L: 14/40 MS: 1 ChangeByte- 00:06:31.132 [2024-07-23 10:27:19.623941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.623966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.624022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.624036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.132 [2024-07-23 10:27:19.624108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:444e4444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.132 [2024-07-23 10:27:19.624122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 #28 NEW cov: 12041 ft: 14073 corp: 9/184b lim: 40 exec/s: 0 rss: 72Mb L: 24/40 MS: 1 ChangeByte- 00:06:31.392 [2024-07-23 
10:27:19.674117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.674142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 [2024-07-23 10:27:19.674200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.674213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.392 [2024-07-23 10:27:19.674269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.674286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 #29 NEW cov: 12041 ft: 14102 corp: 10/208b lim: 40 exec/s: 0 rss: 72Mb L: 24/40 MS: 1 InsertRepeatedBytes- 00:06:31.392 [2024-07-23 10:27:19.714169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.714195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 [2024-07-23 10:27:19.714267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.714281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.392 [2024-07-23 10:27:19.714338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:14141c14 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.714352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 #30 NEW cov: 12041 ft: 14164 corp: 11/232b lim: 40 exec/s: 0 rss: 72Mb L: 24/40 MS: 1 ChangeBit- 00:06:31.392 [2024-07-23 10:27:19.764306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.764332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 [2024-07-23 10:27:19.764389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:44444454 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.764402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.392 [2024-07-23 10:27:19.764459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.764472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 #31 NEW cov: 12041 ft: 14189 corp: 12/256b lim: 40 exec/s: 0 rss: 72Mb L: 24/40 MS: 1 ChangeBit- 00:06:31.392 [2024-07-23 10:27:19.804165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:08474747 cdw11:47474747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.804192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 #34 NEW cov: 12041 ft: 14205 corp: 13/271b lim: 40 exec/s: 0 rss: 72Mb L: 15/40 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:31.392 [2024-07-23 10:27:19.844241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff3affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.844266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 #35 NEW cov: 12041 ft: 14261 corp: 14/286b lim: 40 exec/s: 0 rss: 72Mb L: 15/40 MS: 1 ChangeByte- 00:06:31.392 [2024-07-23 10:27:19.884356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.392 [2024-07-23 10:27:19.884381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.652 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:31.652 #36 NEW cov: 12064 ft: 14298 corp: 15/297b lim: 40 exec/s: 0 rss: 72Mb L: 11/40 MS: 1 EraseBytes- 00:06:31.652 [2024-07-23 10:27:19.924728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:19.924756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:19.924818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:19.924831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:19.924887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:19.924901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.652 #37 NEW cov: 12064 ft: 14306 corp: 16/326b lim: 40 exec/s: 0 rss: 72Mb L: 29/40 MS: 1 EraseBytes- 00:06:31.652 [2024-07-23 10:27:19.974914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:000000e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:19.974938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:19.975015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 
cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:19.975029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:19.975091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:19.975103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.652 #38 NEW cov: 12064 ft: 14324 corp: 17/355b lim: 40 exec/s: 38 rss: 72Mb L: 29/40 MS: 1 ChangeBinInt- 00:06:31.652 [2024-07-23 10:27:20.025173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:0a141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:20.025199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:20.025272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:14140000 cdw11:00e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:20.025287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:20.025344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e2141414 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:20.025357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:20.025417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:141414e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:20.025430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.652 #39 NEW cov: 12064 ft: 14366 corp: 18/388b lim: 40 exec/s: 39 rss: 72Mb L: 33/40 MS: 1 CrossOver- 00:06:31.652 [2024-07-23 10:27:20.074953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:20.074991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.652 #40 NEW cov: 12064 ft: 14404 corp: 19/399b lim: 40 exec/s: 40 rss: 73Mb L: 11/40 MS: 1 ShuffleBytes- 00:06:31.652 [2024-07-23 10:27:20.125185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:20.125211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.652 [2024-07-23 10:27:20.125269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.652 [2024-07-23 10:27:20.125283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:31.652 #41 NEW cov: 12064 ft: 14592 corp: 20/416b lim: 40 exec/s: 41 rss: 73Mb L: 17/40 MS: 1 EraseBytes- 00:06:31.911 [2024-07-23 10:27:20.165401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3044443a cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.911 [2024-07-23 10:27:20.165427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.911 [2024-07-23 10:27:20.165503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.911 [2024-07-23 10:27:20.165517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.911 [2024-07-23 10:27:20.165572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.911 [2024-07-23 10:27:20.165586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.911 #42 NEW cov: 12064 ft: 14645 corp: 21/441b lim: 40 exec/s: 42 rss: 73Mb L: 25/40 MS: 1 InsertByte- 00:06:31.912 [2024-07-23 10:27:20.205276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.912 [2024-07-23 10:27:20.205301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.912 #43 NEW cov: 12064 ft: 14683 corp: 22/455b lim: 40 exec/s: 43 rss: 73Mb L: 14/40 MS: 1 CopyPart- 00:06:31.912 [2024-07-23 10:27:20.245365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff3affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.912 [2024-07-23 10:27:20.245389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.912 #44 NEW cov: 12064 ft: 14718 corp: 23/470b lim: 40 exec/s: 44 rss: 73Mb L: 15/40 MS: 1 ShuffleBytes- 00:06:31.912 [2024-07-23 10:27:20.295544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.912 [2024-07-23 10:27:20.295568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.912 #45 NEW cov: 12064 ft: 14727 corp: 24/484b lim: 40 exec/s: 45 rss: 73Mb L: 14/40 MS: 1 ShuffleBytes- 00:06:31.912 [2024-07-23 10:27:20.345658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.912 [2024-07-23 10:27:20.345683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.912 #46 NEW cov: 12064 ft: 14749 corp: 25/495b lim: 40 exec/s: 46 rss: 73Mb L: 11/40 MS: 1 ShuffleBytes- 00:06:31.912 [2024-07-23 10:27:20.395813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.912 [2024-07-23 
10:27:20.395841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.171 #47 NEW cov: 12064 ft: 14775 corp: 26/510b lim: 40 exec/s: 47 rss: 73Mb L: 15/40 MS: 1 ShuffleBytes- 00:06:32.171 [2024-07-23 10:27:20.445965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff03ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.445989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.171 #48 NEW cov: 12064 ft: 14795 corp: 27/524b lim: 40 exec/s: 48 rss: 73Mb L: 14/40 MS: 1 ChangeBinInt- 00:06:32.171 [2024-07-23 10:27:20.496081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.496106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.171 #49 NEW cov: 12064 ft: 14797 corp: 28/538b lim: 40 exec/s: 49 rss: 73Mb L: 14/40 MS: 1 ChangeBinInt- 00:06:32.171 [2024-07-23 10:27:20.536331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:b90affff cdw11:ffff3aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.536355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.171 [2024-07-23 10:27:20.536413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.536427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.171 #50 NEW cov: 12064 ft: 14809 corp: 29/554b lim: 40 exec/s: 50 rss: 73Mb L: 16/40 MS: 1 InsertByte- 00:06:32.171 [2024-07-23 10:27:20.576547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:30444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.576573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.171 [2024-07-23 10:27:20.576633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44c44444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.576647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.171 [2024-07-23 10:27:20.576707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.576721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.171 #51 NEW cov: 12064 ft: 14813 corp: 30/578b lim: 40 exec/s: 51 rss: 73Mb L: 24/40 MS: 1 ChangeBit- 00:06:32.171 [2024-07-23 10:27:20.616452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:32.171 [2024-07-23 10:27:20.616477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.171 #52 NEW cov: 12064 ft: 14825 corp: 31/589b lim: 40 exec/s: 52 rss: 73Mb L: 11/40 MS: 1 CopyPart- 00:06:32.171 [2024-07-23 10:27:20.656567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01189a6c cdw11:4175b2d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.171 [2024-07-23 10:27:20.656594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.431 #53 NEW cov: 12064 ft: 14836 corp: 32/603b lim: 40 exec/s: 53 rss: 73Mb L: 14/40 MS: 1 CMP- DE: "\001\030\232lAu\262\330"- 00:06:32.431 [2024-07-23 10:27:20.706663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.431 [2024-07-23 10:27:20.706692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.431 #54 NEW cov: 12064 ft: 14856 corp: 33/611b lim: 40 exec/s: 54 rss: 73Mb L: 8/40 MS: 1 EraseBytes- 00:06:32.431 [2024-07-23 10:27:20.756840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffe2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.431 [2024-07-23 10:27:20.756866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.431 #55 NEW cov: 12064 ft: 14867 corp: 34/626b lim: 40 exec/s: 55 rss: 73Mb L: 15/40 MS: 1 CrossOver- 00:06:32.431 [2024-07-23 10:27:20.807094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:b90affff cdw11:ffffff3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.431 [2024-07-23 10:27:20.807120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.431 [2024-07-23 10:27:20.807195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.431 [2024-07-23 10:27:20.807209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.431 #56 NEW cov: 12064 ft: 14872 corp: 35/642b lim: 40 exec/s: 56 rss: 73Mb L: 16/40 MS: 1 ShuffleBytes- 00:06:32.431 [2024-07-23 10:27:20.857096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffff0800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.431 [2024-07-23 10:27:20.857121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.431 #57 NEW cov: 12064 ft: 14878 corp: 36/657b lim: 40 exec/s: 57 rss: 74Mb L: 15/40 MS: 1 ChangeBinInt- 00:06:32.431 [2024-07-23 10:27:20.897180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffe2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.431 [2024-07-23 10:27:20.897206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.692 #58 NEW cov: 12064 ft: 14884 corp: 37/672b lim: 
40 exec/s: 58 rss: 74Mb L: 15/40 MS: 1 ChangeBit- 00:06:32.692 [2024-07-23 10:27:20.947348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff03ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.692 [2024-07-23 10:27:20.947374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.693 #59 NEW cov: 12064 ft: 14886 corp: 38/681b lim: 40 exec/s: 59 rss: 74Mb L: 9/40 MS: 1 EraseBytes- 00:06:32.693 [2024-07-23 10:27:20.987722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a1414fe cdw11:ffffff14 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.693 [2024-07-23 10:27:20.987748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.693 [2024-07-23 10:27:20.987806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.693 [2024-07-23 10:27:20.987822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.693 [2024-07-23 10:27:20.987879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:14141c14 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.693 [2024-07-23 10:27:20.987895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.693 #60 NEW cov: 12064 ft: 14897 corp: 39/705b lim: 40 exec/s: 30 rss: 74Mb L: 24/40 MS: 1 CMP- DE: "\376\377\377\377"- 00:06:32.693 #60 DONE cov: 12064 ft: 14897 corp: 39/705b lim: 40 exec/s: 30 rss: 74Mb 00:06:32.693 ###### Recommended dictionary. ###### 00:06:32.693 "\001\030\232lAu\262\330" # Uses: 0 00:06:32.693 "\376\377\377\377" # Uses: 0 00:06:32.693 ###### End of recommended dictionary. 
###### 00:06:32.693 Done 60 runs in 2 second(s) 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:32.693 10:27:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:32.693 [2024-07-23 10:27:21.186223] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:32.693 [2024-07-23 10:27:21.186292] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429797 ] 00:06:32.952 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.212 [2024-07-23 10:27:21.484234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.212 [2024-07-23 10:27:21.517625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.212 [2024-07-23 10:27:21.570285] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.212 [2024-07-23 10:27:21.586602] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:33.212 INFO: Running with entropic power schedule (0xFF, 100). 
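The run.sh trace above shows how each fuzzer run gets its own TCP port, JSON config, and LSAN suppression file. A minimal standalone sketch of those setup steps for run 11, assuming the same workspace layout; the redirections into /tmp/fuzz_json_11.conf and /var/tmp/suppress_nvmf_fuzz are inferred from the -c flag and the LSAN_OPTIONS value rather than shown verbatim in the trace, and WS is shorthand introduced here:

    #!/usr/bin/env bash
    set -euo pipefail
    WS=/var/jenkins/workspace/short-fuzz-phy-autotest   # shorthand, not a run.sh variable

    fuzzer_type=11
    port="44$(printf %02d "$fuzzer_type")"              # printf %02d 11 -> port 4411
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    corpus_dir="$WS/spdk/../corpus/llvm_nvmf_${fuzzer_type}"

    mkdir -p "$corpus_dir"
    # Point the template config's TCP listener at this run's port (4420 -> 4411).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$WS/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # Suppress the two known, intentional leaks in the NVMe-oF target.
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > "$suppress_file"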
00:06:33.212 INFO: Seed: 2304606183 00:06:33.212 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:33.212 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:33.212 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:33.212 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.212 #2 INITED exec/s: 0 rss: 64Mb 00:06:33.212 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:33.212 This may also happen if the target rejected all inputs we tried so far 00:06:33.212 [2024-07-23 10:27:21.631337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.212 [2024-07-23 10:27:21.631378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.212 [2024-07-23 10:27:21.631429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.212 [2024-07-23 10:27:21.631445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.471 NEW_FUNC[1/692]: 0x4a2590 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:33.471 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.471 #10 NEW cov: 11832 ft: 11812 corp: 2/24b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 3 ShuffleBytes-CMP-InsertRepeatedBytes- DE: "\001\000\000\001"- 00:06:33.731 [2024-07-23 10:27:21.982169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:21.982219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.731 [2024-07-23 10:27:21.982269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:21.982285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.731 #11 NEW cov: 11962 ft: 12453 corp: 3/47b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ChangeByte- 00:06:33.731 [2024-07-23 10:27:22.062234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:22.062272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.731 [2024-07-23 10:27:22.062321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:22.062338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.731 #17 NEW cov: 11968 ft: 12707 corp: 4/70b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 
1 ShuffleBytes- 00:06:33.731 [2024-07-23 10:27:22.142384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f701caca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:22.142420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.731 #22 NEW cov: 12053 ft: 13720 corp: 5/79b lim: 40 exec/s: 0 rss: 71Mb L: 9/23 MS: 5 CrossOver-ChangeByte-ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:06:33.731 [2024-07-23 10:27:22.202730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:22.202766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.731 [2024-07-23 10:27:22.202810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00bf0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:22.202827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.731 [2024-07-23 10:27:22.202859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.731 [2024-07-23 10:27:22.202875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.990 #23 NEW cov: 12053 ft: 14045 corp: 6/103b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 InsertByte- 00:06:33.990 [2024-07-23 10:27:22.282951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:000000f5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.282985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.990 [2024-07-23 10:27:22.283035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.283051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.990 [2024-07-23 10:27:22.283082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00f50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.283097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.990 [2024-07-23 10:27:22.283126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.283142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.990 #29 NEW cov: 12053 ft: 14476 corp: 7/137b lim: 40 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 CopyPart- 00:06:33.990 [2024-07-23 10:27:22.342897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:c5f701ca cdw11:cacacaca SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.342928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.990 #30 NEW cov: 12053 ft: 14582 corp: 8/147b lim: 40 exec/s: 0 rss: 71Mb L: 10/34 MS: 1 InsertByte- 00:06:33.990 [2024-07-23 10:27:22.423261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a01cbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.423292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.990 [2024-07-23 10:27:22.423341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.423357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.990 [2024-07-23 10:27:22.423388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:cbcb0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.423404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.990 [2024-07-23 10:27:22.423433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.423449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.990 #31 NEW cov: 12053 ft: 14614 corp: 9/186b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:33.990 [2024-07-23 10:27:22.483258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f701caca cdw11:ca010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.990 [2024-07-23 10:27:22.483288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.250 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:34.250 #32 NEW cov: 12076 ft: 14638 corp: 10/195b lim: 40 exec/s: 0 rss: 71Mb L: 9/39 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:06:34.250 [2024-07-23 10:27:22.543565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a05cbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.250 [2024-07-23 10:27:22.543595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.250 [2024-07-23 10:27:22.543643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.250 [2024-07-23 10:27:22.543659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.250 [2024-07-23 10:27:22.543690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:cbcb0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.250 [2024-07-23 10:27:22.543705] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.250 [2024-07-23 10:27:22.543735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.250 [2024-07-23 10:27:22.543750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.250 #33 NEW cov: 12076 ft: 14672 corp: 11/234b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ChangeBit- 00:06:34.250 [2024-07-23 10:27:22.623582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f701caca cdw11:ca010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.250 [2024-07-23 10:27:22.623612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.250 #34 NEW cov: 12076 ft: 14726 corp: 12/244b lim: 40 exec/s: 34 rss: 72Mb L: 10/39 MS: 1 CopyPart- 00:06:34.250 [2024-07-23 10:27:22.703792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.250 [2024-07-23 10:27:22.703823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.250 #35 NEW cov: 12076 ft: 14738 corp: 13/256b lim: 40 exec/s: 35 rss: 72Mb L: 12/39 MS: 1 EraseBytes- 00:06:34.510 [2024-07-23 10:27:22.764014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.510 [2024-07-23 10:27:22.764046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.510 [2024-07-23 10:27:22.764097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00bf0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.510 [2024-07-23 10:27:22.764113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.510 #36 NEW cov: 12076 ft: 14815 corp: 14/275b lim: 40 exec/s: 36 rss: 72Mb L: 19/39 MS: 1 EraseBytes- 00:06:34.510 [2024-07-23 10:27:22.844153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f701caca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.510 [2024-07-23 10:27:22.844184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.510 #37 NEW cov: 12076 ft: 14833 corp: 15/288b lim: 40 exec/s: 37 rss: 72Mb L: 13/39 MS: 1 CrossOver- 00:06:34.510 [2024-07-23 10:27:22.924489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.510 [2024-07-23 10:27:22.924519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.510 [2024-07-23 10:27:22.924568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.510 [2024-07-23 10:27:22.924589] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.510 [2024-07-23 10:27:22.924620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.510 [2024-07-23 10:27:22.924635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.510 #38 NEW cov: 12076 ft: 14879 corp: 16/319b lim: 40 exec/s: 38 rss: 72Mb L: 31/39 MS: 1 CopyPart- 00:06:34.510 [2024-07-23 10:27:22.974545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01cacaca cdw11:cacaf7ca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.510 [2024-07-23 10:27:22.974577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.769 #39 NEW cov: 12076 ft: 14896 corp: 17/328b lim: 40 exec/s: 39 rss: 72Mb L: 9/39 MS: 1 ShuffleBytes- 00:06:34.769 [2024-07-23 10:27:23.024739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01cacaca cdw11:ca2b2b2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.769 [2024-07-23 10:27:23.024770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.769 [2024-07-23 10:27:23.024825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2b2b2b2b cdw11:2b2b2b2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.769 [2024-07-23 10:27:23.024841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.769 [2024-07-23 10:27:23.024871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2b2b2b2b cdw11:2b2b2b2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.769 [2024-07-23 10:27:23.024887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.769 #40 NEW cov: 12076 ft: 14916 corp: 18/358b lim: 40 exec/s: 40 rss: 72Mb L: 30/39 MS: 1 InsertRepeatedBytes- 00:06:34.769 [2024-07-23 10:27:23.104871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3b08fe3c cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.769 [2024-07-23 10:27:23.104901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.769 #41 NEW cov: 12076 ft: 14933 corp: 19/368b lim: 40 exec/s: 41 rss: 72Mb L: 10/39 MS: 1 ChangeBinInt- 00:06:34.770 [2024-07-23 10:27:23.185137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.770 [2024-07-23 10:27:23.185166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.770 [2024-07-23 10:27:23.185214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.770 [2024-07-23 10:27:23.185230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.770 #42 NEW cov: 12076 ft: 14949 corp: 
20/391b lim: 40 exec/s: 42 rss: 72Mb L: 23/39 MS: 1 ShuffleBytes- 00:06:34.770 [2024-07-23 10:27:23.235171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00f701ca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.770 [2024-07-23 10:27:23.235200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.029 #43 NEW cov: 12076 ft: 14954 corp: 21/405b lim: 40 exec/s: 43 rss: 72Mb L: 14/39 MS: 1 InsertRepeatedBytes- 00:06:35.029 [2024-07-23 10:27:23.295384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3b08fe3c cdw11:cacaffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.029 [2024-07-23 10:27:23.295417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.029 [2024-07-23 10:27:23.295466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.029 [2024-07-23 10:27:23.295482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.029 #45 NEW cov: 12076 ft: 14966 corp: 22/422b lim: 40 exec/s: 45 rss: 72Mb L: 17/39 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:35.029 [2024-07-23 10:27:23.375646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.029 [2024-07-23 10:27:23.375678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.029 [2024-07-23 10:27:23.375712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:0000f500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.029 [2024-07-23 10:27:23.375727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.029 #46 NEW cov: 12076 ft: 14979 corp: 23/441b lim: 40 exec/s: 46 rss: 72Mb L: 19/39 MS: 1 CopyPart- 00:06:35.029 [2024-07-23 10:27:23.455767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.029 [2024-07-23 10:27:23.455807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.029 #47 NEW cov: 12076 ft: 14987 corp: 24/453b lim: 40 exec/s: 47 rss: 72Mb L: 12/39 MS: 1 ShuffleBytes- 00:06:35.289 [2024-07-23 10:27:23.536267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a05cb00 cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.289 [2024-07-23 10:27:23.536300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.289 [2024-07-23 10:27:23.536334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcbcbcb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.289 [2024-07-23 10:27:23.536350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.289 [2024-07-23 10:27:23.536380] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:cbcbcb00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.289 [2024-07-23 10:27:23.536396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.289 [2024-07-23 10:27:23.536425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.289 [2024-07-23 10:27:23.536440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.289 [2024-07-23 10:27:23.536471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.289 [2024-07-23 10:27:23.536486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.289 #48 NEW cov: 12076 ft: 15115 corp: 25/493b lim: 40 exec/s: 48 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:06:35.289 [2024-07-23 10:27:23.616167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a0f701ca cdw11:caca0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.289 [2024-07-23 10:27:23.616198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.289 #49 NEW cov: 12076 ft: 15140 corp: 26/503b lim: 40 exec/s: 24 rss: 73Mb L: 10/40 MS: 1 InsertByte- 00:06:35.289 #49 DONE cov: 12076 ft: 15140 corp: 26/503b lim: 40 exec/s: 24 rss: 73Mb 00:06:35.289 ###### Recommended dictionary. ###### 00:06:35.289 "\001\000\000\001" # Uses: 3 00:06:35.289 ###### End of recommended dictionary. 
###### 00:06:35.289 Done 49 runs in 2 second(s) 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:35.289 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:35.549 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.549 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.549 10:27:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:35.549 [2024-07-23 10:27:23.819682] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:35.549 [2024-07-23 10:27:23.819761] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430170 ] 00:06:35.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.808 [2024-07-23 10:27:24.117874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.808 [2024-07-23 10:27:24.151065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.808 [2024-07-23 10:27:24.203752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.808 [2024-07-23 10:27:24.220079] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:35.808 INFO: Running with entropic power schedule (0xFF, 100). 
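With the per-run config in place, the logged command line for run 12 can be replayed by hand. This sketch only restructures the invocation traced above onto multiple lines; the flag meanings are inferred from the run.sh locals (timen -> -t, fuzzer_type -> -Z, trid -> -F, nvmf_cfg -> -c, corpus_dir -> -D), not from the tool's documentation, and WS is again shorthand introduced here:

    WS=/var/jenkins/workspace/short-fuzz-phy-autotest
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412'

    # report_objects/suppressions settings match the LSAN_OPTIONS local in the trace.
    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
    "$WS/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m 0x1 -s 512 \
        -P "$WS/spdk/../output/llvm/" \
        -F "$trid" \
        -c /tmp/fuzz_json_12.conf \
        -t 1 \
        -D "$WS/spdk/../corpus/llvm_nvmf_12" \
        -Z 12

Because each run rewrites trsvcid to its own value (4411, 4412, ...), successive fuzzers target independent in-process TCP listeners on the same host without colliding.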
00:06:35.808 INFO: Seed: 644635220 00:06:35.808 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:35.808 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:35.808 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:35.808 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.808 #2 INITED exec/s: 0 rss: 63Mb 00:06:35.808 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:35.808 This may also happen if the target rejected all inputs we tried so far 00:06:35.808 [2024-07-23 10:27:24.285958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.808 [2024-07-23 10:27:24.285996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.808 [2024-07-23 10:27:24.286066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.808 [2024-07-23 10:27:24.286084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.808 [2024-07-23 10:27:24.286156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.808 [2024-07-23 10:27:24.286175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.808 [2024-07-23 10:27:24.286249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.808 [2024-07-23 10:27:24.286267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.326 NEW_FUNC[1/692]: 0x4a4300 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:36.326 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:36.326 #22 NEW cov: 11830 ft: 11831 corp: 2/38b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 5 ChangeByte-InsertByte-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:06:36.326 [2024-07-23 10:27:24.626894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.626961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.326 [2024-07-23 10:27:24.627065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.627101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.326 [2024-07-23 10:27:24.627201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.627237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.326 [2024-07-23 10:27:24.627340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.627378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.326 #23 NEW cov: 11960 ft: 12601 corp: 3/76b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertByte- 00:06:36.326 [2024-07-23 10:27:24.686717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.686750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.326 [2024-07-23 10:27:24.686821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.686842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.326 [2024-07-23 10:27:24.686910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.686935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.326 [2024-07-23 10:27:24.687004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.326 [2024-07-23 10:27:24.687025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.327 #24 NEW cov: 11966 ft: 12827 corp: 4/113b lim: 40 exec/s: 0 rss: 70Mb L: 37/38 MS: 1 ShuffleBytes- 00:06:36.327 [2024-07-23 10:27:24.726775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.726807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.327 [2024-07-23 10:27:24.726863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.726877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.327 [2024-07-23 10:27:24.726929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.726943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.327 [2024-07-23 10:27:24.726997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.727010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.327 #25 NEW cov: 12051 ft: 13060 corp: 5/151b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 CopyPart- 00:06:36.327 [2024-07-23 10:27:24.776939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.776963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.327 [2024-07-23 10:27:24.777020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.777034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.327 [2024-07-23 10:27:24.777087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.777101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.327 [2024-07-23 10:27:24.777155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.327 [2024-07-23 10:27:24.777168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.327 #26 NEW cov: 12051 ft: 13114 corp: 6/188b lim: 40 exec/s: 0 rss: 71Mb L: 37/38 MS: 1 ChangeBinInt- 00:06:36.587 [2024-07-23 10:27:24.827059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.827087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.827144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000f7ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.827162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.827217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.827230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.827284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.827297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.587 #27 NEW cov: 12051 ft: 13162 corp: 7/225b lim: 40 exec/s: 0 rss: 71Mb L: 37/38 MS: 1 ChangeBinInt- 00:06:36.587 [2024-07-23 10:27:24.867194] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.867222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.867266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.867280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.867323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.867337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.867390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.867403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.587 #28 NEW cov: 12051 ft: 13332 corp: 8/262b lim: 40 exec/s: 0 rss: 71Mb L: 37/38 MS: 1 ShuffleBytes- 00:06:36.587 [2024-07-23 10:27:24.907336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.907363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.907437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.907451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.907508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.907521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.907575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.907589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.587 #29 NEW cov: 12051 ft: 13371 corp: 9/300b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 ChangeBit- 00:06:36.587 [2024-07-23 10:27:24.947441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.947472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 
10:27:24.947530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.947544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.947599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fb000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.947612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.947665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.947678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.587 #30 NEW cov: 12051 ft: 13416 corp: 10/337b lim: 40 exec/s: 0 rss: 71Mb L: 37/38 MS: 1 ChangeBinInt- 00:06:36.587 [2024-07-23 10:27:24.997557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.997585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.997641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.997654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.997707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fb000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.997720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:24.997775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:24.997793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.587 #31 NEW cov: 12051 ft: 13440 corp: 11/374b lim: 40 exec/s: 0 rss: 72Mb L: 37/38 MS: 1 ShuffleBytes- 00:06:36.587 [2024-07-23 10:27:25.047708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:25.047736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:25.047795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:25.047810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:36.587 [2024-07-23 10:27:25.047865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.587 [2024-07-23 10:27:25.047879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.587 [2024-07-23 10:27:25.047932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.588 [2024-07-23 10:27:25.047949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.588 #32 NEW cov: 12051 ft: 13481 corp: 12/411b lim: 40 exec/s: 0 rss: 72Mb L: 37/38 MS: 1 ChangeBit- 00:06:36.847 [2024-07-23 10:27:25.087696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.847 [2024-07-23 10:27:25.087723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.847 [2024-07-23 10:27:25.087782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.847 [2024-07-23 10:27:25.087796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.847 [2024-07-23 10:27:25.087849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.847 [2024-07-23 10:27:25.087862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.847 #33 NEW cov: 12051 ft: 13831 corp: 13/441b lim: 40 exec/s: 0 rss: 72Mb L: 30/38 MS: 1 EraseBytes- 00:06:36.847 [2024-07-23 10:27:25.127902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.847 [2024-07-23 10:27:25.127929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.847 [2024-07-23 10:27:25.127983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.847 [2024-07-23 10:27:25.127996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.847 [2024-07-23 10:27:25.128048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.847 [2024-07-23 10:27:25.128061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.847 [2024-07-23 10:27:25.128114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.847 [2024-07-23 10:27:25.128127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:06:36.847 #34 NEW cov: 12051 ft: 13844 corp: 14/479b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 ChangeBit- 00:06:36.848 [2024-07-23 10:27:25.168082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.168109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.168166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.168179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.168232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.168245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.168299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.168316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.848 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:36.848 #35 NEW cov: 12074 ft: 13943 corp: 15/518b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertByte- 00:06:36.848 [2024-07-23 10:27:25.208168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.208195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.208250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.208264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.208318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.208331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.208386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.208399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.848 #36 NEW cov: 12074 ft: 13962 corp: 16/555b lim: 40 exec/s: 0 rss: 72Mb L: 37/39 MS: 1 ChangeBit- 00:06:36.848 [2024-07-23 10:27:25.257847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a262626 cdw11:26262626 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.257873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.848 #37 NEW cov: 12074 ft: 14717 corp: 17/565b lim: 40 exec/s: 37 rss: 72Mb L: 10/39 MS: 1 InsertRepeatedBytes- 00:06:36.848 [2024-07-23 10:27:25.298565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:a0000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.298590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.298646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.298660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.298712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.298725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.298781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.298795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.848 [2024-07-23 10:27:25.298850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:ff004d0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.848 [2024-07-23 10:27:25.298863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.848 #38 NEW cov: 12074 ft: 14807 corp: 18/605b lim: 40 exec/s: 38 rss: 72Mb L: 40/40 MS: 1 InsertByte- 00:06:37.107 [2024-07-23 10:27:25.348604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.348631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.348689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.348703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.348760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.348774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.348835] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00fff600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.348848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.107 #39 NEW cov: 12074 ft: 14817 corp: 19/642b lim: 40 exec/s: 39 rss: 72Mb L: 37/40 MS: 1 ChangeBinInt- 00:06:37.107 [2024-07-23 10:27:25.388476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.388501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.388554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.388567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.388621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.388634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.107 #40 NEW cov: 12074 ft: 14847 corp: 20/672b lim: 40 exec/s: 40 rss: 72Mb L: 30/40 MS: 1 ShuffleBytes- 00:06:37.107 [2024-07-23 10:27:25.438797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.438821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.438875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.438888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.438955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.438969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.439024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.439036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.107 #41 NEW cov: 12074 ft: 14858 corp: 21/710b lim: 40 exec/s: 41 rss: 72Mb L: 38/40 MS: 1 InsertByte- 00:06:37.107 [2024-07-23 10:27:25.479096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.479122] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.479177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.479192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.479246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.479259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.479316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.479329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.107 #42 NEW cov: 12074 ft: 14933 corp: 22/747b lim: 40 exec/s: 42 rss: 72Mb L: 37/40 MS: 1 EraseBytes- 00:06:37.107 [2024-07-23 10:27:25.518953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.518978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.519036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.519049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.519102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.519115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.519170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00003d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.519184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.107 #43 NEW cov: 12074 ft: 14942 corp: 23/785b lim: 40 exec/s: 43 rss: 72Mb L: 38/40 MS: 1 CopyPart- 00:06:37.107 [2024-07-23 10:27:25.569131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 10:27:25.569156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.569209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.107 [2024-07-23 
10:27:25.569223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.107 [2024-07-23 10:27:25.569279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fb000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.108 [2024-07-23 10:27:25.569292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.108 [2024-07-23 10:27:25.569348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.108 [2024-07-23 10:27:25.569365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.108 #44 NEW cov: 12074 ft: 15010 corp: 24/822b lim: 40 exec/s: 44 rss: 72Mb L: 37/40 MS: 1 ChangeByte- 00:06:37.368 [2024-07-23 10:27:25.609400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.609424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.609495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.609509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.609564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.609578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.609643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.609657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.609710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00004000 cdw11:00004d0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.609739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:37.368 #45 NEW cov: 12074 ft: 15045 corp: 25/862b lim: 40 exec/s: 45 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:37.368 [2024-07-23 10:27:25.659349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.659374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.659446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000f7ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:37.368 [2024-07-23 10:27:25.659460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.659514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.659528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.659581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.659595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.368 #46 NEW cov: 12074 ft: 15057 corp: 26/899b lim: 40 exec/s: 46 rss: 72Mb L: 37/40 MS: 1 ChangeByte- 00:06:37.368 [2024-07-23 10:27:25.709340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.709365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.709420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.709433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.709503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.709517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.368 #47 NEW cov: 12074 ft: 15063 corp: 27/929b lim: 40 exec/s: 47 rss: 72Mb L: 30/40 MS: 1 ShuffleBytes- 00:06:37.368 [2024-07-23 10:27:25.749637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.749662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.749716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.749729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.749786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.749799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.749854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.749867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.368 #48 NEW cov: 12074 ft: 15075 corp: 28/968b lim: 40 exec/s: 48 rss: 72Mb L: 39/40 MS: 1 CopyPart- 00:06:37.368 [2024-07-23 10:27:25.799608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.799633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.799703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.799718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.799770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.799786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.368 #49 NEW cov: 12074 ft: 15085 corp: 29/998b lim: 40 exec/s: 49 rss: 72Mb L: 30/40 MS: 1 EraseBytes- 00:06:37.368 [2024-07-23 10:27:25.839873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.839900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.839956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.839970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.840033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.840046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.368 [2024-07-23 10:27:25.840100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:f6000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.368 [2024-07-23 10:27:25.840113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.628 #50 NEW cov: 12074 ft: 15110 corp: 30/1037b lim: 40 exec/s: 50 rss: 73Mb L: 39/40 MS: 1 CrossOver- 00:06:37.628 [2024-07-23 10:27:25.890026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.628 [2024-07-23 10:27:25.890052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.628 [2024-07-23 10:27:25.890105] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000f7ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.628 [2024-07-23 10:27:25.890119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.628 [2024-07-23 10:27:25.890172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.628 [2024-07-23 10:27:25.890186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.628 [2024-07-23 10:27:25.890237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:f7ff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.628 [2024-07-23 10:27:25.890250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.628 #51 NEW cov: 12074 ft: 15114 corp: 31/1076b lim: 40 exec/s: 51 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:06:37.628 [2024-07-23 10:27:25.929839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.628 [2024-07-23 10:27:25.929864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:25.929916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:25.929930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.629 #52 NEW cov: 12074 ft: 15348 corp: 32/1097b lim: 40 exec/s: 52 rss: 73Mb L: 21/40 MS: 1 EraseBytes- 00:06:37.629 [2024-07-23 10:27:25.980316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:25.980344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:25.980415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:25.980429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:25.980484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:25.980497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:25.980552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:f6000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:25.980566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.629 #53 NEW cov: 12074 ft: 
15366 corp: 33/1132b lim: 40 exec/s: 53 rss: 73Mb L: 35/40 MS: 1 EraseBytes- 00:06:37.629 [2024-07-23 10:27:26.020462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.020488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.020558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.020573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.020627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00730000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.020640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.020694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.020708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.629 #54 NEW cov: 12074 ft: 15368 corp: 34/1170b lim: 40 exec/s: 54 rss: 73Mb L: 38/40 MS: 1 InsertByte- 00:06:37.629 [2024-07-23 10:27:26.060656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.060682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.060751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.060764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.060820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000002f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.060833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.060886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.060899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.060951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00004000 cdw11:00004d0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.060965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:37.629 #55 
NEW cov: 12074 ft: 15378 corp: 35/1210b lim: 40 exec/s: 55 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:06:37.629 [2024-07-23 10:27:26.100638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000d000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.100663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.100722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.100737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.100792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.100805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.629 [2024-07-23 10:27:26.100856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00fff600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.629 [2024-07-23 10:27:26.100868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.629 #56 NEW cov: 12074 ft: 15405 corp: 36/1247b lim: 40 exec/s: 56 rss: 73Mb L: 37/40 MS: 1 ChangeByte- 00:06:37.889 [2024-07-23 10:27:26.140929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.889 [2024-07-23 10:27:26.140954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.889 [2024-07-23 10:27:26.141008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.889 [2024-07-23 10:27:26.141021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.141074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.141087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.141138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:003f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.141151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.141205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00004000 cdw11:00004d0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.141218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:06:37.890 #57 NEW cov: 12074 ft: 15444 corp: 37/1287b lim: 40 exec/s: 57 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:06:37.890 [2024-07-23 10:27:26.180871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.180897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.180953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.180966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.181020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.181033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.181087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.181104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.890 #58 NEW cov: 12074 ft: 15450 corp: 38/1322b lim: 40 exec/s: 58 rss: 73Mb L: 35/40 MS: 1 EraseBytes- 00:06:37.890 [2024-07-23 10:27:26.221020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.221050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.221106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000025 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.221121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.221177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:21000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.221190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.221242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.221256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.890 #59 NEW cov: 12074 ft: 15453 corp: 39/1359b lim: 40 exec/s: 59 rss: 73Mb L: 37/40 MS: 1 ChangeByte- 00:06:37.890 [2024-07-23 10:27:26.261165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:002e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 
10:27:26.261192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.261249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00003d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.261264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.261318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.261332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.890 [2024-07-23 10:27:26.261386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.890 [2024-07-23 10:27:26.261399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.890 #60 NEW cov: 12074 ft: 15492 corp: 40/1397b lim: 40 exec/s: 30 rss: 73Mb L: 38/40 MS: 1 ChangeByte- 00:06:37.890 #60 DONE cov: 12074 ft: 15492 corp: 40/1397b lim: 40 exec/s: 30 rss: 73Mb 00:06:37.890 Done 60 runs in 2 second(s) 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.150 10:27:26 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:38.151 [2024-07-23 10:27:26.468191] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:38.151 [2024-07-23 10:27:26.468287] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430472 ] 00:06:38.151 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.410 [2024-07-23 10:27:26.775637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.410 [2024-07-23 10:27:26.808432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.410 [2024-07-23 10:27:26.861084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.410 [2024-07-23 10:27:26.877402] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:38.410 INFO: Running with entropic power schedule (0xFF, 100). 00:06:38.410 INFO: Seed: 3299639750 00:06:38.410 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:38.410 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:38.410 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:38.410 INFO: A corpus is not provided, starting from an empty corpus 00:06:38.410 #2 INITED exec/s: 0 rss: 63Mb 00:06:38.410 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:38.410 This may also happen if the target rejected all inputs we tried so far 00:06:38.670 [2024-07-23 10:27:26.926228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.670 [2024-07-23 10:27:26.926258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.929 NEW_FUNC[1/691]: 0x4a5ec0 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:38.929 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:38.929 #11 NEW cov: 11818 ft: 11819 corp: 2/14b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 4 CopyPart-CrossOver-CrossOver-InsertRepeatedBytes- 00:06:38.929 [2024-07-23 10:27:27.246971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.929 [2024-07-23 10:27:27.247019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.929 #13 NEW cov: 11948 ft: 12539 corp: 3/29b lim: 40 exec/s: 0 rss: 70Mb L: 15/15 MS: 2 CrossOver-CrossOver- 00:06:38.929 [2024-07-23 10:27:27.286946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.929 [2024-07-23 10:27:27.286972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.929 #19 NEW cov: 11954 ft: 12733 corp: 4/44b lim: 40 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 ChangeByte- 00:06:38.929 [2024-07-23 10:27:27.337137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.929 [2024-07-23 10:27:27.337162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.929 #20 NEW cov: 12039 ft: 12988 corp: 5/57b lim: 40 exec/s: 0 rss: 71Mb L: 13/15 MS: 1 ChangeBit- 00:06:38.929 [2024-07-23 10:27:27.387269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.929 [2024-07-23 10:27:27.387294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.929 #21 NEW cov: 12039 ft: 13038 corp: 6/72b lim: 40 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 ShuffleBytes- 00:06:39.189 [2024-07-23 10:27:27.437650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.437686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.437757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0a0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.437771] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.437832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.437846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.189 #22 NEW cov: 12039 ft: 13446 corp: 7/99b lim: 40 exec/s: 0 rss: 71Mb L: 27/27 MS: 1 CopyPart- 00:06:39.189 [2024-07-23 10:27:27.477512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7effffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.477537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.189 #25 NEW cov: 12039 ft: 13501 corp: 8/113b lim: 40 exec/s: 0 rss: 71Mb L: 14/27 MS: 3 ChangeByte-ChangeByte-CrossOver- 00:06:39.189 [2024-07-23 10:27:27.517723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0eff2e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.517748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.517805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.517819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.189 #26 NEW cov: 12039 ft: 13710 corp: 9/129b lim: 40 exec/s: 0 rss: 72Mb L: 16/27 MS: 1 InsertByte- 00:06:39.189 [2024-07-23 10:27:27.567862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.567890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.567945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0a0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.567958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.189 #27 NEW cov: 12039 ft: 13776 corp: 10/148b lim: 40 exec/s: 0 rss: 72Mb L: 19/27 MS: 1 EraseBytes- 00:06:39.189 [2024-07-23 10:27:27.618274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.618301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.618360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.618374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.618428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.618441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.618497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.618510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.189 #28 NEW cov: 12039 ft: 14253 corp: 11/180b lim: 40 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:06:39.189 [2024-07-23 10:27:27.658332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.658359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.658413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.658427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.658482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.658496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.189 [2024-07-23 10:27:27.658550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.189 [2024-07-23 10:27:27.658564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.449 #29 NEW cov: 12039 ft: 14294 corp: 12/212b lim: 40 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:06:39.449 [2024-07-23 10:27:27.708123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.708149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.449 #30 NEW cov: 12039 ft: 14319 corp: 13/225b lim: 40 exec/s: 0 rss: 72Mb L: 13/32 MS: 1 CMP- DE: "\017\000\000\000\000\000\000\000"- 00:06:39.449 [2024-07-23 10:27:27.758637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.758663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.449 [2024-07-23 10:27:27.758720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 
nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.758734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.449 [2024-07-23 10:27:27.758792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.758805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.449 [2024-07-23 10:27:27.758858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff29 cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.758871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.449 #31 NEW cov: 12039 ft: 14340 corp: 14/257b lim: 40 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:06:39.449 [2024-07-23 10:27:27.798503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.798529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.449 [2024-07-23 10:27:27.798583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.798597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.449 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:39.449 #32 NEW cov: 12062 ft: 14376 corp: 15/280b lim: 40 exec/s: 0 rss: 72Mb L: 23/32 MS: 1 PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:06:39.449 [2024-07-23 10:27:27.838638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.838663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.449 [2024-07-23 10:27:27.838720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff28 cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.449 [2024-07-23 10:27:27.838735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.449 #33 NEW cov: 12062 ft: 14458 corp: 16/296b lim: 40 exec/s: 0 rss: 72Mb L: 16/32 MS: 1 InsertByte- 00:06:39.450 [2024-07-23 10:27:27.878897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.450 [2024-07-23 10:27:27.878923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.450 [2024-07-23 10:27:27.878980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff3b 
cdw11:0f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.450 [2024-07-23 10:27:27.878995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.450 [2024-07-23 10:27:27.879050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.450 [2024-07-23 10:27:27.879066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.450 #34 NEW cov: 12062 ft: 14466 corp: 17/320b lim: 40 exec/s: 34 rss: 72Mb L: 24/32 MS: 1 InsertByte- 00:06:39.450 [2024-07-23 10:27:27.929031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.450 [2024-07-23 10:27:27.929057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.450 [2024-07-23 10:27:27.929115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff3b cdw11:0f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.450 [2024-07-23 10:27:27.929129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.450 [2024-07-23 10:27:27.929184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00030000 cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.450 [2024-07-23 10:27:27.929197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.710 #35 NEW cov: 12062 ft: 14480 corp: 18/344b lim: 40 exec/s: 35 rss: 72Mb L: 24/32 MS: 1 ChangeBinInt- 00:06:39.710 [2024-07-23 10:27:27.979018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0eff2e cdw11:ffffff06 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:27.979043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.710 [2024-07-23 10:27:27.979099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:27.979112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.710 #36 NEW cov: 12062 ft: 14484 corp: 19/360b lim: 40 exec/s: 36 rss: 72Mb L: 16/32 MS: 1 CMP- DE: "\006\000"- 00:06:39.710 [2024-07-23 10:27:28.029033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.029057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.710 #37 NEW cov: 12062 ft: 14507 corp: 20/373b lim: 40 exec/s: 37 rss: 72Mb L: 13/32 MS: 1 ChangeByte- 00:06:39.710 [2024-07-23 10:27:28.079549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 
10:27:28.079574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.710 [2024-07-23 10:27:28.079633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.079646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.710 [2024-07-23 10:27:28.079713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.079727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.710 [2024-07-23 10:27:28.079782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff0a0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.079799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.710 #38 NEW cov: 12062 ft: 14521 corp: 21/412b lim: 40 exec/s: 38 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:39.710 [2024-07-23 10:27:28.129328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.129352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.710 #39 NEW cov: 12062 ft: 14584 corp: 22/425b lim: 40 exec/s: 39 rss: 72Mb L: 13/39 MS: 1 ChangeByte- 00:06:39.710 [2024-07-23 10:27:28.169665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff5bffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.169690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.710 [2024-07-23 10:27:28.169748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:0f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.169762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.710 [2024-07-23 10:27:28.169819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.169832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.710 #40 NEW cov: 12062 ft: 14604 corp: 23/449b lim: 40 exec/s: 40 rss: 72Mb L: 24/39 MS: 1 InsertByte- 00:06:39.710 [2024-07-23 10:27:28.209555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f000005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.710 [2024-07-23 10:27:28.209580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.970 #41 NEW cov: 12062 ft: 14634 corp: 
24/462b lim: 40 exec/s: 41 rss: 72Mb L: 13/39 MS: 1 CMP- DE: "\005\000\000\000"- 00:06:39.970 [2024-07-23 10:27:28.259684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.259708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.970 #42 NEW cov: 12062 ft: 14644 corp: 25/477b lim: 40 exec/s: 42 rss: 72Mb L: 15/39 MS: 1 CopyPart- 00:06:39.970 [2024-07-23 10:27:28.299817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.299843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.970 #43 NEW cov: 12062 ft: 14652 corp: 26/492b lim: 40 exec/s: 43 rss: 73Mb L: 15/39 MS: 1 ChangeByte- 00:06:39.970 [2024-07-23 10:27:28.340165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.340190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.970 [2024-07-23 10:27:28.340245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff3b cdw11:0f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.340259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.970 [2024-07-23 10:27:28.340327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.340344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.970 #44 NEW cov: 12062 ft: 14662 corp: 27/516b lim: 40 exec/s: 44 rss: 73Mb L: 24/39 MS: 1 ChangeByte- 00:06:39.970 [2024-07-23 10:27:28.380386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:88888888 cdw11:88888888 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.380410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.970 [2024-07-23 10:27:28.380463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:88888888 cdw11:88888888 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.380476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.970 [2024-07-23 10:27:28.380530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:88880a0e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.380543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.970 [2024-07-23 10:27:28.380598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE 
(1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff28ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.380610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.970 #45 NEW cov: 12062 ft: 14670 corp: 28/550b lim: 40 exec/s: 45 rss: 73Mb L: 34/39 MS: 1 InsertRepeatedBytes- 00:06:39.970 [2024-07-23 10:27:28.430403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.430427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.970 [2024-07-23 10:27:28.430482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0a0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.430496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.970 [2024-07-23 10:27:28.430550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff6f00 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.970 [2024-07-23 10:27:28.430563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.970 #46 NEW cov: 12062 ft: 14693 corp: 29/577b lim: 40 exec/s: 46 rss: 73Mb L: 27/39 MS: 1 CMP- DE: "o\000"- 00:06:39.971 [2024-07-23 10:27:28.470526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.971 [2024-07-23 10:27:28.470550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.971 [2024-07-23 10:27:28.470606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0a0a28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.971 [2024-07-23 10:27:28.470620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.971 [2024-07-23 10:27:28.470676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffff6f cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.971 [2024-07-23 10:27:28.470689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.230 #47 NEW cov: 12062 ft: 14700 corp: 30/605b lim: 40 exec/s: 47 rss: 73Mb L: 28/39 MS: 1 InsertByte- 00:06:40.230 [2024-07-23 10:27:28.520566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0eff2e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.230 [2024-07-23 10:27:28.520591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.230 [2024-07-23 10:27:28.520645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.230 [2024-07-23 10:27:28.520658] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.230 #48 NEW cov: 12062 ft: 14703 corp: 31/621b lim: 40 exec/s: 48 rss: 73Mb L: 16/39 MS: 1 CopyPart- 00:06:40.230 [2024-07-23 10:27:28.560537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f000005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.230 [2024-07-23 10:27:28.560561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.230 #49 NEW cov: 12062 ft: 14710 corp: 32/634b lim: 40 exec/s: 49 rss: 73Mb L: 13/39 MS: 1 ChangeByte- 00:06:40.230 [2024-07-23 10:27:28.610953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.230 [2024-07-23 10:27:28.610977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.230 [2024-07-23 10:27:28.611033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.231 [2024-07-23 10:27:28.611046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.231 [2024-07-23 10:27:28.611118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.231 [2024-07-23 10:27:28.611132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.231 #50 NEW cov: 12062 ft: 14712 corp: 33/663b lim: 40 exec/s: 50 rss: 73Mb L: 29/39 MS: 1 EraseBytes- 00:06:40.231 [2024-07-23 10:27:28.661198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.231 [2024-07-23 10:27:28.661223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.231 [2024-07-23 10:27:28.661280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.231 [2024-07-23 10:27:28.661294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.231 [2024-07-23 10:27:28.661348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000f700 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.231 [2024-07-23 10:27:28.661361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.231 [2024-07-23 10:27:28.661415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff0a0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.231 [2024-07-23 10:27:28.661429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.231 #56 NEW cov: 12062 ft: 14722 corp: 34/702b lim: 40 exec/s: 56 rss: 73Mb L: 39/39 MS: 1 ChangeBinInt- 00:06:40.231 
[2024-07-23 10:27:28.700942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f00005e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.231 [2024-07-23 10:27:28.700969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.490 #57 NEW cov: 12062 ft: 14723 corp: 35/716b lim: 40 exec/s: 57 rss: 73Mb L: 14/39 MS: 1 InsertByte- 00:06:40.490 [2024-07-23 10:27:28.751103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f000005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.490 [2024-07-23 10:27:28.751126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.490 #58 NEW cov: 12062 ft: 14747 corp: 36/729b lim: 40 exec/s: 58 rss: 73Mb L: 13/39 MS: 1 PersAutoDict- DE: "\005\000\000\000"- 00:06:40.490 [2024-07-23 10:27:28.791333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.490 [2024-07-23 10:27:28.791357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.490 [2024-07-23 10:27:28.791411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.490 [2024-07-23 10:27:28.791424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.490 #63 NEW cov: 12062 ft: 14761 corp: 37/749b lim: 40 exec/s: 63 rss: 73Mb L: 20/39 MS: 5 ChangeBit-InsertByte-InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:40.490 [2024-07-23 10:27:28.831591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.490 [2024-07-23 10:27:28.831616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.491 [2024-07-23 10:27:28.831667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff3b cdw11:0f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.491 [2024-07-23 10:27:28.831681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.491 [2024-07-23 10:27:28.831733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.491 [2024-07-23 10:27:28.831746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.491 #64 NEW cov: 12062 ft: 14773 corp: 38/773b lim: 40 exec/s: 64 rss: 73Mb L: 24/39 MS: 1 ShuffleBytes- 00:06:40.491 [2024-07-23 10:27:28.881454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0f004005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.491 [2024-07-23 10:27:28.881478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.491 #65 NEW 
cov: 12062 ft: 14787 corp: 39/786b lim: 40 exec/s: 65 rss: 73Mb L: 13/39 MS: 1 ChangeBit- 00:06:40.491 [2024-07-23 10:27:28.921587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.491 [2024-07-23 10:27:28.921613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.491 #66 NEW cov: 12062 ft: 14794 corp: 40/798b lim: 40 exec/s: 33 rss: 73Mb L: 12/39 MS: 1 EraseBytes- 00:06:40.491 #66 DONE cov: 12062 ft: 14794 corp: 40/798b lim: 40 exec/s: 33 rss: 73Mb 00:06:40.491 ###### Recommended dictionary. ###### 00:06:40.491 "\017\000\000\000\000\000\000\000" # Uses: 1 00:06:40.491 "\006\000" # Uses: 0 00:06:40.491 "\005\000\000\000" # Uses: 1 00:06:40.491 "o\000" # Uses: 0 00:06:40.491 ###### End of recommended dictionary. ###### 00:06:40.491 Done 66 runs in 2 second(s) 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:40.750 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:40.751 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.751 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.751 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.751 10:27:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:40.751 [2024-07-23 10:27:29.128997] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 
initialization... 00:06:40.751 [2024-07-23 10:27:29.129089] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430791 ] 00:06:40.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.010 [2024-07-23 10:27:29.390893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.010 [2024-07-23 10:27:29.421265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.010 [2024-07-23 10:27:29.473710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.010 [2024-07-23 10:27:29.490046] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:41.010 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.010 INFO: Seed: 1618682292 00:06:41.269 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:41.269 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:41.269 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:41.269 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.269 #2 INITED exec/s: 0 rss: 64Mb 00:06:41.269 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.269 This may also happen if the target rejected all inputs we tried so far 00:06:41.269 [2024-07-23 10:27:29.545416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.269 [2024-07-23 10:27:29.545449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.269 [2024-07-23 10:27:29.545506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.269 [2024-07-23 10:27:29.545524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.530 NEW_FUNC[1/692]: 0x4a7a80 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:41.530 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.530 #4 NEW cov: 11812 ft: 11813 corp: 2/21b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:41.530 [2024-07-23 10:27:29.866584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:29.866637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.530 [2024-07-23 10:27:29.866706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:29.866730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.530 [2024-07-23 10:27:29.866805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:41.530 [2024-07-23 10:27:29.866827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.530 [2024-07-23 10:27:29.866898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:29.866917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.530 #5 NEW cov: 11949 ft: 12865 corp: 3/53b lim: 35 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:41.530 [2024-07-23 10:27:29.916229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:29.916257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.530 [2024-07-23 10:27:29.916314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:29.916328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.530 #6 NEW cov: 11955 ft: 13114 corp: 4/72b lim: 35 exec/s: 0 rss: 70Mb L: 19/32 MS: 1 EraseBytes- 00:06:41.530 [2024-07-23 10:27:29.956366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:29.956392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.530 [2024-07-23 10:27:29.956449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:29.956463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.530 #12 NEW cov: 12040 ft: 13353 corp: 5/91b lim: 35 exec/s: 0 rss: 70Mb L: 19/32 MS: 1 ChangeBinInt- 00:06:41.530 [2024-07-23 10:27:30.006492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:30.006519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.530 [2024-07-23 10:27:30.006577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.530 [2024-07-23 10:27:30.006592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.790 #13 NEW cov: 12040 ft: 13437 corp: 6/110b lim: 35 exec/s: 0 rss: 71Mb L: 19/32 MS: 1 ChangeBinInt- 00:06:41.790 [2024-07-23 10:27:30.056999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.790 [2024-07-23 10:27:30.057029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.790 [2024-07-23 10:27:30.057086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:41.790 [2024-07-23 10:27:30.057101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.790 [2024-07-23 10:27:30.057158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.790 [2024-07-23 10:27:30.057173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.790 [2024-07-23 10:27:30.057231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.790 [2024-07-23 10:27:30.057245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.790 #14 NEW cov: 12040 ft: 13512 corp: 7/143b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 CopyPart- 00:06:41.790 [2024-07-23 10:27:30.106814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.790 [2024-07-23 10:27:30.106841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.790 [2024-07-23 10:27:30.106899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.790 [2024-07-23 10:27:30.106914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.790 #15 NEW cov: 12040 ft: 13629 corp: 8/163b lim: 35 exec/s: 0 rss: 71Mb L: 20/33 MS: 1 CopyPart- 00:06:41.791 [2024-07-23 10:27:30.146925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.146951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.791 [2024-07-23 10:27:30.147027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.147041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.791 #16 NEW cov: 12040 ft: 13653 corp: 9/183b lim: 35 exec/s: 0 rss: 71Mb L: 20/33 MS: 1 ChangeByte- 00:06:41.791 [2024-07-23 10:27:30.197228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.197254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.791 [2024-07-23 10:27:30.197313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.197327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.791 [2024-07-23 10:27:30.197383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.197397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.791 #17 NEW cov: 12040 ft: 13841 corp: 10/210b lim: 35 exec/s: 0 rss: 71Mb L: 27/33 MS: 1 CopyPart- 00:06:41.791 [2024-07-23 10:27:30.237468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.237498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.791 [2024-07-23 10:27:30.237556] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.237571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.791 [2024-07-23 10:27:30.237627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.237641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.791 [2024-07-23 10:27:30.237698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.237712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.791 #18 NEW cov: 12040 ft: 13882 corp: 11/243b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeBinInt- 00:06:41.791 [2024-07-23 10:27:30.287360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.791 [2024-07-23 10:27:30.287387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.050 NEW_FUNC[1/2]: 0x4c8f40 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:42.050 NEW_FUNC[2/2]: 0x11fcd90 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1759 00:06:42.050 #21 NEW cov: 12073 ft: 14063 corp: 12/259b lim: 35 exec/s: 0 rss: 72Mb L: 16/33 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:06:42.050 [2024-07-23 10:27:30.327456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.050 [2024-07-23 10:27:30.327484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.050 [2024-07-23 10:27:30.327542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.050 [2024-07-23 10:27:30.327556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.050 #22 NEW cov: 12073 ft: 14105 corp: 13/277b lim: 35 exec/s: 0 rss: 72Mb L: 18/33 MS: 1 EraseBytes- 00:06:42.050 [2024-07-23 10:27:30.367811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.050 [2024-07-23 10:27:30.367838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.050 [2024-07-23 10:27:30.367895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.050 [2024-07-23 10:27:30.367909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.050 [2024-07-23 10:27:30.367966] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.367979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.368033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.368047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.051 #23 NEW cov: 12073 ft: 14189 corp: 14/311b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:06:42.051 [2024-07-23 10:27:30.418023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.418050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.418125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.418140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.418202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.418216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.418275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.418289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.051 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:42.051 #24 NEW cov: 12096 ft: 14222 corp: 15/344b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBit- 00:06:42.051 [2024-07-23 10:27:30.458105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.458132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.458189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.458204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.458261] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.458274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.458330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.458344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.051 #25 NEW cov: 12096 ft: 14244 corp: 16/377b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeByte- 00:06:42.051 [2024-07-23 10:27:30.497922] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.497948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.498005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.498019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.051 #26 NEW cov: 12096 ft: 14277 corp: 17/397b lim: 35 exec/s: 26 rss: 72Mb L: 20/34 MS: 1 ChangeBinInt- 00:06:42.051 [2024-07-23 10:27:30.548046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.548073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.051 [2024-07-23 10:27:30.548130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.051 [2024-07-23 10:27:30.548148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.311 #27 NEW cov: 12096 ft: 14320 corp: 18/416b lim: 35 exec/s: 27 rss: 72Mb L: 19/34 MS: 1 CMP- DE: "\377\377"- 00:06:42.311 [2024-07-23 10:27:30.588174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.588202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.588273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.588287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.311 #28 NEW cov: 12096 ft: 14350 corp: 19/436b lim: 35 exec/s: 28 rss: 72Mb L: 20/34 MS: 1 ShuffleBytes- 00:06:42.311 [2024-07-23 10:27:30.638617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.638643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 
10:27:30.638702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.638716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.638773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.638793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.638848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.638862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.311 #29 NEW cov: 12096 ft: 14383 corp: 20/470b lim: 35 exec/s: 29 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:06:42.311 [2024-07-23 10:27:30.688754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.688787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.688863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.688877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.688933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.688947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.689005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.689018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.311 #30 NEW cov: 12096 ft: 14403 corp: 21/503b lim: 35 exec/s: 30 rss: 72Mb L: 33/34 MS: 1 ChangeByte- 00:06:42.311 [2024-07-23 10:27:30.728853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.728881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.728944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.728959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.729018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.729031] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.729088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.729101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.311 #31 NEW cov: 12096 ft: 14444 corp: 22/537b lim: 35 exec/s: 31 rss: 72Mb L: 34/34 MS: 1 InsertByte- 00:06:42.311 [2024-07-23 10:27:30.768952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.768977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.769033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.769049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.769102] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.769118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.311 [2024-07-23 10:27:30.769175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-07-23 10:27:30.769188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.311 #32 NEW cov: 12096 ft: 14501 corp: 23/569b lim: 35 exec/s: 32 rss: 72Mb L: 32/34 MS: 1 PersAutoDict- DE: "\377\377"- 00:06:42.572 [2024-07-23 10:27:30.818815] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:30.818840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.572 #33 NEW cov: 12096 ft: 14550 corp: 24/585b lim: 35 exec/s: 33 rss: 72Mb L: 16/34 MS: 1 ChangeBit- 00:06:42.572 [2024-07-23 10:27:30.868950] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:30.868975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.572 #34 NEW cov: 12096 ft: 14568 corp: 25/602b lim: 35 exec/s: 34 rss: 72Mb L: 17/34 MS: 1 InsertByte- 00:06:42.572 [2024-07-23 10:27:30.919363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:30.919389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:30.919465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 
[2024-07-23 10:27:30.919479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:30.919536] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:30.919552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:30.919609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:30.919623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.572 #35 NEW cov: 12096 ft: 14616 corp: 26/630b lim: 35 exec/s: 35 rss: 72Mb L: 28/34 MS: 1 CrossOver- 00:06:42.572 [2024-07-23 10:27:30.969189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:30.969214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:30.969288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:30.969302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.572 #36 NEW cov: 12096 ft: 14627 corp: 27/650b lim: 35 exec/s: 36 rss: 72Mb L: 20/34 MS: 1 ShuffleBytes- 00:06:42.572 [2024-07-23 10:27:31.009569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.009594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:31.009651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.009667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:31.009723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.009738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:31.009793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.009807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.572 #37 NEW cov: 12096 ft: 14637 corp: 28/682b lim: 35 exec/s: 37 rss: 72Mb L: 32/34 MS: 1 ChangeBit- 00:06:42.572 [2024-07-23 10:27:31.059716] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.059742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:31.059797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.059812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:31.059866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.059880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.572 [2024-07-23 10:27:31.059935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.572 [2024-07-23 10:27:31.059948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.832 #38 NEW cov: 12096 ft: 14639 corp: 29/710b lim: 35 exec/s: 38 rss: 73Mb L: 28/34 MS: 1 ChangeByte- 00:06:42.832 [2024-07-23 10:27:31.109718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.832 [2024-07-23 10:27:31.109743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.109804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.109819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.109875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.109890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.833 #39 NEW cov: 12096 ft: 14652 corp: 30/735b lim: 35 exec/s: 39 rss: 73Mb L: 25/34 MS: 1 EraseBytes- 00:06:42.833 [2024-07-23 10:27:31.149689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.149714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.149770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.149789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.833 #40 NEW cov: 12096 ft: 14681 corp: 31/755b lim: 35 exec/s: 40 rss: 73Mb L: 20/34 MS: 1 ChangeByte- 00:06:42.833 [2024-07-23 10:27:31.199798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.199823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.199882] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.199896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.230195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.230219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.230281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.230295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.230364] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:6 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.230380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.230440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.230453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.833 NEW_FUNC[1/1]: 0x4c7910 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:06:42.833 #42 NEW cov: 12120 ft: 14712 corp: 32/783b lim: 35 exec/s: 42 rss: 73Mb L: 28/34 MS: 2 ChangeBit-CMP- DE: "\001\000\000\000\000\000\000\017"- 00:06:42.833 [2024-07-23 10:27:31.270319] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.270347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.270404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.270420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.270477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.270491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.833 [2024-07-23 10:27:31.270546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.270560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.833 #43 NEW cov: 12120 ft: 14786 corp: 33/811b lim: 35 exec/s: 43 rss: 73Mb L: 28/34 MS: 1 EraseBytes- 00:06:42.833 
[2024-07-23 10:27:31.310114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000026 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.833 [2024-07-23 10:27:31.310140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.093 #44 NEW cov: 12120 ft: 14789 corp: 34/828b lim: 35 exec/s: 44 rss: 73Mb L: 17/34 MS: 1 ShuffleBytes- 00:06:43.093 [2024-07-23 10:27:31.360249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.093 [2024-07-23 10:27:31.360275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.093 [2024-07-23 10:27:31.360335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.093 [2024-07-23 10:27:31.360349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.093 #45 NEW cov: 12120 ft: 14801 corp: 35/847b lim: 35 exec/s: 45 rss: 73Mb L: 19/34 MS: 1 ChangeBit- 00:06:43.093 [2024-07-23 10:27:31.400511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.400536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.094 [2024-07-23 10:27:31.400594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.400607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.094 [2024-07-23 10:27:31.400667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.400682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.094 #46 NEW cov: 12120 ft: 14807 corp: 36/872b lim: 35 exec/s: 46 rss: 73Mb L: 25/34 MS: 1 CopyPart- 00:06:43.094 [2024-07-23 10:27:31.450549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.450574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.094 #47 NEW cov: 12120 ft: 14825 corp: 37/892b lim: 35 exec/s: 47 rss: 73Mb L: 20/34 MS: 1 CrossOver- 00:06:43.094 [2024-07-23 10:27:31.490625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.490652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.094 [2024-07-23 10:27:31.490712] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.490725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:43.094 #48 NEW cov: 12120 ft: 14835 corp: 38/912b lim: 35 exec/s: 48 rss: 73Mb L: 20/34 MS: 1 ChangeByte- 00:06:43.094 [2024-07-23 10:27:31.531086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.531111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.094 [2024-07-23 10:27:31.531170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.531185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.094 [2024-07-23 10:27:31.531240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.531256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.094 [2024-07-23 10:27:31.531314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.094 [2024-07-23 10:27:31.531331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.094 #49 NEW cov: 12120 ft: 14871 corp: 39/946b lim: 35 exec/s: 24 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:43.094 #49 DONE cov: 12120 ft: 14871 corp: 39/946b lim: 35 exec/s: 24 rss: 73Mb 00:06:43.094 ###### Recommended dictionary. ###### 00:06:43.094 "\377\377" # Uses: 1 00:06:43.094 "\001\000\000\000\000\000\000\017" # Uses: 0 00:06:43.094 ###### End of recommended dictionary. 
###### 00:06:43.094 Done 49 runs in 2 second(s) 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.354 10:27:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:43.354 [2024-07-23 10:27:31.727038] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:43.355 [2024-07-23 10:27:31.727115] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431117 ] 00:06:43.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.614 [2024-07-23 10:27:32.035925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.614 [2024-07-23 10:27:32.064593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.874 [2024-07-23 10:27:32.117127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.874 [2024-07-23 10:27:32.133417] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:43.874 INFO: Running with entropic power schedule (0xFF, 100). 
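[editor's note] The "Recommended dictionary" block printed at the end of run 14 above is libFuzzer reporting byte sequences (shown with octal escapes) that repeatedly produced new coverage. These runs start from an empty corpus, but such entries can be fed back into a later run through libFuzzer's -dict= option, which the run.sh trace here does not use. A minimal sketch, assuming the harness path from this log and rewriting the octal escapes in the hex form the AFL-style dictionary parser expects (the file name and kw labels are made up):
cat > /tmp/nvmf_dict.txt <<'EOF'
# "\377\377" and "\001\000\000\000\000\000\000\017" from the log, as hex escapes
kw1="\xff\xff"
kw2="\x01\x00\x00\x00\x00\x00\x00\x0f"
EOF
# hypothetical re-run: same flags as the llvm_nvme_fuzz invocation above, plus -dict=
# /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz ... -dict=/tmp/nvmf_dict.txt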
00:06:43.874 INFO: Seed: 4262672596 00:06:43.874 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:43.874 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:43.874 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:43.874 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.874 #2 INITED exec/s: 0 rss: 64Mb 00:06:43.874 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:43.874 This may also happen if the target rejected all inputs we tried so far 00:06:43.874 [2024-07-23 10:27:32.188914] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.874 [2024-07-23 10:27:32.188945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.134 NEW_FUNC[1/692]: 0x4a8fc0 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:44.134 NEW_FUNC[2/692]: 0x4c3780 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:06:44.134 #26 NEW cov: 11823 ft: 11824 corp: 2/21b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 4 ShuffleBytes-CrossOver-ChangeBit-InsertRepeatedBytes- 00:06:44.134 [2024-07-23 10:27:32.541430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.134 [2024-07-23 10:27:32.541483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.134 #27 NEW cov: 11953 ft: 12377 corp: 3/41b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:44.134 [2024-07-23 10:27:32.601413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.134 [2024-07-23 10:27:32.601439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.134 #28 NEW cov: 11959 ft: 12617 corp: 4/61b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:44.394 [2024-07-23 10:27:32.651556] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.394 [2024-07-23 10:27:32.651583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.394 #34 NEW cov: 12044 ft: 12918 corp: 5/81b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeByte- 00:06:44.394 [2024-07-23 10:27:32.711760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.394 [2024-07-23 10:27:32.711793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.394 #35 NEW cov: 12044 ft: 13059 corp: 6/101b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeByte- 00:06:44.394 [2024-07-23 10:27:32.762200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.394 [2024-07-23 10:27:32.762225] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.394 [2024-07-23 10:27:32.762318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.394 [2024-07-23 10:27:32.762334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.394 #36 NEW cov: 12044 ft: 13333 corp: 7/122b lim: 35 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 InsertByte- 00:06:44.394 #37 NEW cov: 12044 ft: 13616 corp: 8/132b lim: 35 exec/s: 0 rss: 70Mb L: 10/21 MS: 1 EraseBytes- 00:06:44.394 [2024-07-23 10:27:32.882609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.394 [2024-07-23 10:27:32.882637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.394 [2024-07-23 10:27:32.882738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.394 [2024-07-23 10:27:32.882764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.653 #38 NEW cov: 12044 ft: 13674 corp: 9/154b lim: 35 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 InsertByte- 00:06:44.653 [2024-07-23 10:27:32.942528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000063b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:32.942552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.653 #39 NEW cov: 12044 ft: 13697 corp: 10/174b lim: 35 exec/s: 0 rss: 71Mb L: 20/22 MS: 1 ChangeByte- 00:06:44.653 [2024-07-23 10:27:32.993004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:32.993042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.653 [2024-07-23 10:27:32.993132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:32.993148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.653 [2024-07-23 10:27:32.993252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:32.993268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.653 #40 NEW cov: 12044 ft: 13859 corp: 11/196b lim: 35 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 ShuffleBytes- 00:06:44.653 [2024-07-23 10:27:33.053215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:33.053242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.653 [2024-07-23 10:27:33.053339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:33.053355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.653 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:44.653 #42 NEW cov: 12067 ft: 13914 corp: 12/220b lim: 35 exec/s: 0 rss: 71Mb L: 24/24 MS: 2 CrossOver-CrossOver- 00:06:44.653 [2024-07-23 10:27:33.103489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:33.103518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.653 [2024-07-23 10:27:33.103622] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:33.103638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.653 #43 NEW cov: 12067 ft: 14000 corp: 13/241b lim: 35 exec/s: 0 rss: 71Mb L: 21/24 MS: 1 InsertByte- 00:06:44.653 [2024-07-23 10:27:33.153418] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000063b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.653 [2024-07-23 10:27:33.153447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.913 #44 NEW cov: 12067 ft: 14060 corp: 14/261b lim: 35 exec/s: 44 rss: 71Mb L: 20/24 MS: 1 ChangeBinInt- 00:06:44.913 [2024-07-23 10:27:33.223978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.224006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.913 [2024-07-23 10:27:33.224098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.224116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.913 [2024-07-23 10:27:33.224202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.224217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.913 [2024-07-23 10:27:33.224315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000000d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.224330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.913 #45 NEW cov: 12067 ft: 14494 corp: 15/295b lim: 35 exec/s: 45 rss: 71Mb L: 34/34 MS: 1 CrossOver- 00:06:44.913 [2024-07-23 10:27:33.293807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.293837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.913 #46 NEW cov: 12067 ft: 14569 corp: 16/315b lim: 35 exec/s: 46 rss: 71Mb L: 20/34 MS: 1 ChangeBinInt- 00:06:44.913 [2024-07-23 10:27:33.344602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.344629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.913 [2024-07-23 10:27:33.344726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.344743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.913 [2024-07-23 10:27:33.344844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.344861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.913 #47 NEW cov: 12067 ft: 14617 corp: 17/349b lim: 35 exec/s: 47 rss: 71Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:44.913 [2024-07-23 10:27:33.394551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.394577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.913 [2024-07-23 10:27:33.394672] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-07-23 10:27:33.394688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.173 #48 NEW cov: 12067 ft: 14639 corp: 18/376b lim: 35 exec/s: 48 rss: 71Mb L: 27/34 MS: 1 CopyPart- 00:06:45.173 [2024-07-23 10:27:33.445040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.445068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.173 [2024-07-23 10:27:33.445176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.445204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.173 [2024-07-23 10:27:33.445309] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.445324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.173 [2024-07-23 10:27:33.445421] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000000d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.445438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.173 [2024-07-23 10:27:33.445531] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.445547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.173 #49 NEW cov: 12067 ft: 14773 corp: 19/411b lim: 35 exec/s: 49 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:06:45.173 [2024-07-23 10:27:33.514988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.515015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.173 [2024-07-23 10:27:33.515122] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.515138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.173 [2024-07-23 10:27:33.515339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.515356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.173 #50 NEW cov: 12067 ft: 14783 corp: 20/445b lim: 35 exec/s: 50 rss: 71Mb L: 34/35 MS: 1 CopyPart- 00:06:45.173 [2024-07-23 10:27:33.564609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.564637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.173 #51 NEW cov: 12067 ft: 14870 corp: 21/465b lim: 35 exec/s: 51 rss: 72Mb L: 20/35 MS: 1 CopyPart- 00:06:45.173 [2024-07-23 10:27:33.625098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.625127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.173 [2024-07-23 10:27:33.625220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.173 [2024-07-23 10:27:33.625236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.173 #52 NEW cov: 12067 ft: 14878 corp: 22/492b lim: 35 exec/s: 52 rss: 72Mb L: 27/35 MS: 1 ChangeByte- 00:06:45.433 [2024-07-23 10:27:33.684913] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.684939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.433 [2024-07-23 10:27:33.685035] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.685051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.433 #53 NEW cov: 12067 
ft: 14897 corp: 23/512b lim: 35 exec/s: 53 rss: 72Mb L: 20/35 MS: 1 ShuffleBytes- 00:06:45.433 [2024-07-23 10:27:33.745518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.745546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.433 [2024-07-23 10:27:33.745644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.745659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.433 #54 NEW cov: 12067 ft: 15015 corp: 24/536b lim: 35 exec/s: 54 rss: 72Mb L: 24/35 MS: 1 ShuffleBytes- 00:06:45.433 [2024-07-23 10:27:33.805389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.805415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.433 #55 NEW cov: 12067 ft: 15030 corp: 25/556b lim: 35 exec/s: 55 rss: 72Mb L: 20/35 MS: 1 ChangeBinInt- 00:06:45.433 [2024-07-23 10:27:33.855587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.855613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.433 #56 NEW cov: 12067 ft: 15042 corp: 26/576b lim: 35 exec/s: 56 rss: 72Mb L: 20/35 MS: 1 CopyPart- 00:06:45.433 [2024-07-23 10:27:33.906056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.906081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.433 [2024-07-23 10:27:33.906168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-07-23 10:27:33.906185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.433 #57 NEW cov: 12067 ft: 15052 corp: 27/603b lim: 35 exec/s: 57 rss: 72Mb L: 27/35 MS: 1 ChangeBinInt- 00:06:45.693 [2024-07-23 10:27:33.955978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.693 [2024-07-23 10:27:33.956007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.693 #58 NEW cov: 12067 ft: 15079 corp: 28/623b lim: 35 exec/s: 58 rss: 72Mb L: 20/35 MS: 1 ShuffleBytes- 00:06:45.693 [2024-07-23 10:27:34.006229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.693 [2024-07-23 10:27:34.006256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.693 #59 NEW cov: 12067 ft: 15096 corp: 29/643b lim: 35 exec/s: 59 rss: 72Mb L: 20/35 MS: 1 ShuffleBytes- 00:06:45.693 
[2024-07-23 10:27:34.056613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.693 [2024-07-23 10:27:34.056640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.693 [2024-07-23 10:27:34.056727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.693 [2024-07-23 10:27:34.056743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.693 #60 NEW cov: 12067 ft: 15131 corp: 30/664b lim: 35 exec/s: 60 rss: 72Mb L: 21/35 MS: 1 InsertByte- 00:06:45.693 [2024-07-23 10:27:34.116544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000063b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.693 [2024-07-23 10:27:34.116570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.693 #61 NEW cov: 12067 ft: 15143 corp: 31/684b lim: 35 exec/s: 61 rss: 72Mb L: 20/35 MS: 1 CopyPart- 00:06:45.693 [2024-07-23 10:27:34.166941] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.693 [2024-07-23 10:27:34.166967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.693 [2024-07-23 10:27:34.167067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.693 [2024-07-23 10:27:34.167082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.953 #62 NEW cov: 12067 ft: 15148 corp: 32/710b lim: 35 exec/s: 31 rss: 72Mb L: 26/35 MS: 1 InsertRepeatedBytes- 00:06:45.953 #62 DONE cov: 12067 ft: 15148 corp: 32/710b lim: 35 exec/s: 31 rss: 72Mb 00:06:45.953 Done 62 runs in 2 second(s) 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:45.953 10:27:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:45.953 [2024-07-23 10:27:34.362101] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:45.953 [2024-07-23 10:27:34.362174] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431484 ] 00:06:45.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.213 [2024-07-23 10:27:34.624748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.213 [2024-07-23 10:27:34.651459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.213 [2024-07-23 10:27:34.703993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.473 [2024-07-23 10:27:34.720324] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:46.473 INFO: Running with entropic power schedule (0xFF, 100). 00:06:46.473 INFO: Seed: 2554708230 00:06:46.473 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:46.473 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:46.473 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:46.473 INFO: A corpus is not provided, starting from an empty corpus 00:06:46.473 #2 INITED exec/s: 0 rss: 63Mb 00:06:46.473 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
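[editor's note] The run.sh trace above shows how each nvmf fuzzer instance is isolated: the TCP listener port is derived from the fuzzer type (15 -> 4415, 16 -> 4416), and the shared template config's trsvcid is rewritten with sed so the target listens on that per-instance port. A minimal standalone sketch of that derivation; everything beyond what the trace itself shows (variable names, file locations) is assumed:
fuzzer_type=16
port="44$(printf %02d "$fuzzer_type")"   # yields 4416, matching the trace
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# rewrite the default trsvcid (4420) in the template to the per-instance port
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"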
00:06:46.473 This may also happen if the target rejected all inputs we tried so far 00:06:46.473 [2024-07-23 10:27:34.775630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.473 [2024-07-23 10:27:34.775664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.473 [2024-07-23 10:27:34.775724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.473 [2024-07-23 10:27:34.775742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.473 [2024-07-23 10:27:34.775804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.473 [2024-07-23 10:27:34.775822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.732 NEW_FUNC[1/692]: 0x4aa470 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:46.732 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:46.732 #4 NEW cov: 11904 ft: 11905 corp: 2/76b lim: 105 exec/s: 0 rss: 70Mb L: 75/75 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:46.732 [2024-07-23 10:27:35.096294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.096344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.733 [2024-07-23 10:27:35.096411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.096428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.733 #10 NEW cov: 12034 ft: 12907 corp: 3/137b lim: 105 exec/s: 0 rss: 70Mb L: 61/75 MS: 1 CrossOver- 00:06:46.733 [2024-07-23 10:27:35.136379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.136409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.733 [2024-07-23 10:27:35.136444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.136461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.733 [2024-07-23 10:27:35.136515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.136529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.733 #11 NEW cov: 12040 ft: 13110 corp: 4/212b lim: 105 exec/s: 0 rss: 71Mb L: 75/75 
MS: 1 ShuffleBytes- 00:06:46.733 [2024-07-23 10:27:35.186396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.186423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.733 [2024-07-23 10:27:35.186458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.186473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.733 #12 NEW cov: 12125 ft: 13476 corp: 5/274b lim: 105 exec/s: 0 rss: 71Mb L: 62/75 MS: 1 CrossOver- 00:06:46.733 [2024-07-23 10:27:35.226526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.226553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.733 [2024-07-23 10:27:35.226591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.733 [2024-07-23 10:27:35.226606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.993 #13 NEW cov: 12125 ft: 13534 corp: 6/327b lim: 105 exec/s: 0 rss: 71Mb L: 53/75 MS: 1 CrossOver- 00:06:46.993 [2024-07-23 10:27:35.276599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.276625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.276666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.276682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.993 #14 NEW cov: 12125 ft: 13607 corp: 7/388b lim: 105 exec/s: 0 rss: 71Mb L: 61/75 MS: 1 ShuffleBytes- 00:06:46.993 [2024-07-23 10:27:35.316909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.316936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.316982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.316998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.317052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.317068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.993 #15 NEW cov: 
12125 ft: 13768 corp: 8/465b lim: 105 exec/s: 0 rss: 71Mb L: 77/77 MS: 1 CrossOver- 00:06:46.993 [2024-07-23 10:27:35.367016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.367042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.367086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.367103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.367174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:562949953421312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.367191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.993 #16 NEW cov: 12125 ft: 13818 corp: 9/540b lim: 105 exec/s: 0 rss: 71Mb L: 75/77 MS: 1 ChangeBit- 00:06:46.993 [2024-07-23 10:27:35.417144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.417171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.417215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.417231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.417286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:562949953421312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.417302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.993 #17 NEW cov: 12125 ft: 13934 corp: 10/615b lim: 105 exec/s: 0 rss: 72Mb L: 75/77 MS: 1 ShuffleBytes- 00:06:46.993 [2024-07-23 10:27:35.467283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.467310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.467356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.467372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.993 [2024-07-23 10:27:35.467424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:562949953421312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.993 [2024-07-23 10:27:35.467441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.253 #18 NEW cov: 12125 ft: 
13973 corp: 11/690b lim: 105 exec/s: 0 rss: 72Mb L: 75/77 MS: 1 ChangeByte- 00:06:47.253 [2024-07-23 10:27:35.517554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.517581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 10:27:35.517634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.517652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 10:27:35.517704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.517719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 10:27:35.517769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7595718147998050665 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.517792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.253 #19 NEW cov: 12125 ft: 14492 corp: 12/787b lim: 105 exec/s: 0 rss: 72Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:06:47.253 [2024-07-23 10:27:35.557541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.557568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 10:27:35.557604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.557620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 10:27:35.557673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.557689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.253 #20 NEW cov: 12125 ft: 14494 corp: 13/864b lim: 105 exec/s: 0 rss: 72Mb L: 77/97 MS: 1 ShuffleBytes- 00:06:47.253 [2024-07-23 10:27:35.607845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.607875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 10:27:35.607920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.607936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 
10:27:35.607991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.608008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.253 [2024-07-23 10:27:35.608062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7595718146229534720 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.253 [2024-07-23 10:27:35.608080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.253 #21 NEW cov: 12125 ft: 14532 corp: 14/966b lim: 105 exec/s: 0 rss: 72Mb L: 102/102 MS: 1 InsertRepeatedBytes- 00:06:47.254 [2024-07-23 10:27:35.657820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.657847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.254 [2024-07-23 10:27:35.657910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.657929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.254 [2024-07-23 10:27:35.657982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.657999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.254 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:47.254 #22 NEW cov: 12148 ft: 14630 corp: 15/1041b lim: 105 exec/s: 0 rss: 72Mb L: 75/102 MS: 1 ShuffleBytes- 00:06:47.254 [2024-07-23 10:27:35.697919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.697946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.254 [2024-07-23 10:27:35.697988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.698005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.254 [2024-07-23 10:27:35.698055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.698070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.254 #23 NEW cov: 12148 ft: 14644 corp: 16/1118b lim: 105 exec/s: 0 rss: 72Mb L: 77/102 MS: 1 ChangeBit- 00:06:47.254 [2024-07-23 10:27:35.748191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.748218] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.254 [2024-07-23 10:27:35.748296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.748312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.254 [2024-07-23 10:27:35.748363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.748378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.254 [2024-07-23 10:27:35.748429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7566047824953999360 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.254 [2024-07-23 10:27:35.748446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.514 #24 NEW cov: 12148 ft: 14661 corp: 17/1222b lim: 105 exec/s: 24 rss: 72Mb L: 104/104 MS: 1 CrossOver- 00:06:47.514 [2024-07-23 10:27:35.798435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.798463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.798518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.798534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.798586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.798605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.798654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7566047824953999360 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.798669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.798722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:7595718147998050665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.798738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.514 #30 NEW cov: 12148 ft: 14731 corp: 18/1327b lim: 105 exec/s: 30 rss: 72Mb L: 105/105 MS: 1 InsertByte- 00:06:47.514 [2024-07-23 10:27:35.848585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.848612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.848665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.848680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.848729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.848744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.848794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7566047824953999360 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.848808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.848859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:7595718147998050665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.848875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.514 #31 NEW cov: 12148 ft: 14759 corp: 19/1432b lim: 105 exec/s: 31 rss: 73Mb L: 105/105 MS: 1 ShuffleBytes- 00:06:47.514 [2024-07-23 10:27:35.898486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.898513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.898554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.898571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.898623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.898639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.514 #32 NEW cov: 12148 ft: 14798 corp: 20/1507b lim: 105 exec/s: 32 rss: 73Mb L: 75/105 MS: 1 ChangeBinInt- 00:06:47.514 [2024-07-23 10:27:35.938605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.938631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.938671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.938687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 
10:27:35.938738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.938769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.514 #34 NEW cov: 12148 ft: 14806 corp: 21/1583b lim: 105 exec/s: 34 rss: 73Mb L: 76/105 MS: 2 CrossOver-CrossOver- 00:06:47.514 [2024-07-23 10:27:35.978922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.978950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.979003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14073748835532800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.979019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.979068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.979082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.979132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7566047824953999360 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.979145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.514 [2024-07-23 10:27:35.979194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:7595718147998050665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.514 [2024-07-23 10:27:35.979209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.773 #35 NEW cov: 12148 ft: 14820 corp: 22/1688b lim: 105 exec/s: 35 rss: 73Mb L: 105/105 MS: 1 ChangeByte- 00:06:47.773 [2024-07-23 10:27:36.028874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.773 [2024-07-23 10:27:36.028901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.773 [2024-07-23 10:27:36.028945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.773 [2024-07-23 10:27:36.028961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.029012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2199023255552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.029027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.774 #36 NEW cov: 12148 ft: 14852 corp: 23/1764b lim: 105 exec/s: 36 rss: 
73Mb L: 76/105 MS: 1 InsertByte- 00:06:47.774 [2024-07-23 10:27:36.068957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.068986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.069023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.069053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.069105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2199023255552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.069121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.774 #37 NEW cov: 12148 ft: 14860 corp: 24/1840b lim: 105 exec/s: 37 rss: 73Mb L: 76/105 MS: 1 ShuffleBytes- 00:06:47.774 [2024-07-23 10:27:36.119206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.119234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.119284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.119300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.119352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.119368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.119422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7566047824953999360 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.119438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.774 #38 NEW cov: 12148 ft: 14872 corp: 25/1944b lim: 105 exec/s: 38 rss: 73Mb L: 104/105 MS: 1 ShuffleBytes- 00:06:47.774 [2024-07-23 10:27:36.159109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.159136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.159174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.159190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.774 #39 NEW cov: 12148 ft: 14915 corp: 26/2006b 
lim: 105 exec/s: 39 rss: 73Mb L: 62/105 MS: 1 ChangeByte- 00:06:47.774 [2024-07-23 10:27:36.209235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.209262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.209298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.209312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.774 #40 NEW cov: 12148 ft: 14957 corp: 27/2067b lim: 105 exec/s: 40 rss: 73Mb L: 61/105 MS: 1 ChangeBit- 00:06:47.774 [2024-07-23 10:27:36.249458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.249484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.249533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.249550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.774 [2024-07-23 10:27:36.249602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2199023255552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.774 [2024-07-23 10:27:36.249617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.774 #41 NEW cov: 12148 ft: 15010 corp: 28/2143b lim: 105 exec/s: 41 rss: 73Mb L: 76/105 MS: 1 CopyPart- 00:06:48.033 [2024-07-23 10:27:36.289464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.289491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.289545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.289561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.033 #42 NEW cov: 12148 ft: 15054 corp: 29/2205b lim: 105 exec/s: 42 rss: 73Mb L: 62/105 MS: 1 ShuffleBytes- 00:06:48.033 [2024-07-23 10:27:36.329648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.329674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.329713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.329728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.329788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2199023255552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.329803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.033 #43 NEW cov: 12148 ft: 15065 corp: 30/2268b lim: 105 exec/s: 43 rss: 73Mb L: 63/105 MS: 1 CrossOver- 00:06:48.033 [2024-07-23 10:27:36.369858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.369885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.369937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.369953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.370004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.370018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.370071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:1095216660480 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.370088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.033 #44 NEW cov: 12148 ft: 15080 corp: 31/2357b lim: 105 exec/s: 44 rss: 73Mb L: 89/105 MS: 1 InsertRepeatedBytes- 00:06:48.033 [2024-07-23 10:27:36.420028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2922995106313142416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.420057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.420102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.420118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.420166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.420182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.420234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7566047824953999360 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.420248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.033 #45 NEW cov: 12148 ft: 15094 
corp: 32/2461b lim: 105 exec/s: 45 rss: 73Mb L: 104/105 MS: 1 ChangeByte- 00:06:48.033 [2024-07-23 10:27:36.460145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:48385 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.460172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.460232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.460248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.460304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.460319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.460375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7595718147998050665 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.460392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.033 #46 NEW cov: 12148 ft: 15101 corp: 33/2559b lim: 105 exec/s: 46 rss: 73Mb L: 98/105 MS: 1 InsertByte- 00:06:48.033 [2024-07-23 10:27:36.500018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.500045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.033 [2024-07-23 10:27:36.500082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.033 [2024-07-23 10:27:36.500098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.033 #47 NEW cov: 12148 ft: 15119 corp: 34/2611b lim: 105 exec/s: 47 rss: 73Mb L: 52/105 MS: 1 EraseBytes- 00:06:48.292 [2024-07-23 10:27:36.540332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.540359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.540412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.540427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.540483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.540499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 
10:27:36.540552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7595718146229534720 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.540569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.292 #48 NEW cov: 12148 ft: 15131 corp: 35/2707b lim: 105 exec/s: 48 rss: 73Mb L: 96/105 MS: 1 EraseBytes- 00:06:48.292 [2024-07-23 10:27:36.590410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.590436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.590483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:92358976733184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.590500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.590553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:562949953421312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.590569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.292 #49 NEW cov: 12148 ft: 15165 corp: 36/2782b lim: 105 exec/s: 49 rss: 73Mb L: 75/105 MS: 1 ChangeByte- 00:06:48.292 [2024-07-23 10:27:36.640547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:151797282545598464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.640575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.640611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.640625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.640679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.640693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.292 #50 NEW cov: 12148 ft: 15176 corp: 37/2852b lim: 105 exec/s: 50 rss: 73Mb L: 70/105 MS: 1 CMP- DE: "\000\000\000\000\002\033J\323"- 00:06:48.292 [2024-07-23 10:27:36.680724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10416984886257647760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.680752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.680798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.680815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.680868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.680885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.680942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7595718146229534720 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.680959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.292 #51 NEW cov: 12148 ft: 15177 corp: 38/2954b lim: 105 exec/s: 51 rss: 73Mb L: 102/105 MS: 1 CrossOver- 00:06:48.292 [2024-07-23 10:27:36.720742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.292 [2024-07-23 10:27:36.720769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.292 [2024-07-23 10:27:36.720813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.293 [2024-07-23 10:27:36.720828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.293 [2024-07-23 10:27:36.720896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.293 [2024-07-23 10:27:36.720914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.293 #52 NEW cov: 12148 ft: 15184 corp: 39/3030b lim: 105 exec/s: 26 rss: 74Mb L: 76/105 MS: 1 ShuffleBytes- 00:06:48.293 #52 DONE cov: 12148 ft: 15184 corp: 39/3030b lim: 105 exec/s: 26 rss: 74Mb 00:06:48.293 ###### Recommended dictionary. ###### 00:06:48.293 "\000\000\000\000\002\033J\323" # Uses: 0 00:06:48.293 ###### End of recommended dictionary. 
######
00:06:48.293 Done 52 runs in 2 second(s)
00:06:48.552 10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
10:27:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
10:27:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
10:27:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
10:27:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
[2024-07-23 10:27:36.926436] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
[2024-07-23 10:27:36.926530] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431856 ]
EAL: No free 2048 kB hugepages reported on node 1
00:06:48.810 [2024-07-23 10:27:37.229759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.810 [2024-07-23 10:27:37.261079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.069 [2024-07-23 10:27:37.313810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:49.069 [2024-07-23 10:27:37.330122] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 ***
00:06:49.069 INFO: Running with entropic power schedule (0xFF, 100).
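[annotation] For readers decoding the xtrace above: for each fuzzer index, nvmf/run.sh derives a dedicated TCP port (44 followed by the zero-padded index, hence 4417 for fuzzer 17), rewrites the shared NVMe-oF JSON config to listen on that port, installs LSAN suppressions for two known-benign leaks, and launches llvm_nvme_fuzz against the resulting transport ID. A minimal bash sketch of that setup, reconstructed from the trace; xtrace does not show output redirections, so the redirection targets and the spdk-relative paths below are assumptions, not the verbatim script:

  fuzzer=17                                  # index passed to start_llvm_fuzz 17 1 0x1
  timen=1                                    # run duration argument, forwarded to -t
  port="44$(printf %02d "$fuzzer")"          # -> 4417, one port per fuzzer instance
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # Point this run's config at the per-fuzzer port (the checked-in template uses 4420).
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_${fuzzer}.conf"
  # Suppress the two leaks the harness expects, so LSAN does not fail the run.
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0"
  mkdir -p "../corpus/llvm_nvmf_${fuzzer}"   # persistent corpus, reused across builds
  test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P ../output/llvm/ \
      -F "$trid" -c "/tmp/fuzz_json_${fuzzer}.conf" -t "$timen" \
      -D "../corpus/llvm_nvmf_${fuzzer}" -Z "$fuzzer"

[annotation] In the libFuzzer status lines on either side of this point, roughly: cov is the number of covered coverage points, ft the number of observed features, corp the corpus size as entries/total bytes, lim the current input-size limit, exec/s the execution rate, rss the resident memory, and "L: x/y MS: ..." the new input's length versus the largest corpus entry plus the mutation sequence that produced it. "#52 DONE" above is fuzzer 16 finishing its run; the "INFO:" banner that follows is fuzzer 17 starting with a fresh seed.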
00:06:49.069 INFO: Seed: 869744055 00:06:49.069 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:49.069 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:49.069 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:49.069 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.069 #2 INITED exec/s: 0 rss: 64Mb 00:06:49.069 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.069 This may also happen if the target rejected all inputs we tried so far 00:06:49.069 [2024-07-23 10:27:37.408226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.069 [2024-07-23 10:27:37.408290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.069 [2024-07-23 10:27:37.408422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.069 [2024-07-23 10:27:37.408463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.069 [2024-07-23 10:27:37.408585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.069 [2024-07-23 10:27:37.408614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.069 [2024-07-23 10:27:37.408747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.069 [2024-07-23 10:27:37.408790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.328 NEW_FUNC[1/693]: 0x4ad7f0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:49.328 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:49.328 #32 NEW cov: 11925 ft: 11920 corp: 2/115b lim: 120 exec/s: 0 rss: 70Mb L: 114/114 MS: 5 ChangeByte-ChangeBit-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:06:49.328 [2024-07-23 10:27:37.748518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.748560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.328 [2024-07-23 10:27:37.748656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.748682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.328 [2024-07-23 10:27:37.748800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.748820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:06:49.328 [2024-07-23 10:27:37.748926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.748946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.328 #33 NEW cov: 12055 ft: 12463 corp: 3/229b lim: 120 exec/s: 0 rss: 71Mb L: 114/114 MS: 1 ShuffleBytes- 00:06:49.328 [2024-07-23 10:27:37.818672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.818705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.328 [2024-07-23 10:27:37.818772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.818793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.328 [2024-07-23 10:27:37.818864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.818884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.328 [2024-07-23 10:27:37.818979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.328 [2024-07-23 10:27:37.819000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.587 #37 NEW cov: 12061 ft: 12830 corp: 4/341b lim: 120 exec/s: 0 rss: 71Mb L: 112/114 MS: 4 InsertByte-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:06:49.587 [2024-07-23 10:27:37.869206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.869238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.587 [2024-07-23 10:27:37.869302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.869323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.587 [2024-07-23 10:27:37.869372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.869389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.587 [2024-07-23 10:27:37.869491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.869509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:06:49.587 #43 NEW cov: 12146 ft: 13058 corp: 5/455b lim: 120 exec/s: 0 rss: 71Mb L: 114/114 MS: 1 ShuffleBytes- 00:06:49.587 [2024-07-23 10:27:37.929729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.929760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.587 [2024-07-23 10:27:37.929838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.929858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.587 [2024-07-23 10:27:37.929933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.929964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.587 [2024-07-23 10:27:37.930062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.930081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.587 #44 NEW cov: 12146 ft: 13202 corp: 6/569b lim: 120 exec/s: 0 rss: 71Mb L: 114/114 MS: 1 ShuffleBytes- 00:06:49.587 [2024-07-23 10:27:37.979866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.587 [2024-07-23 10:27:37.979896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.588 [2024-07-23 10:27:37.979980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.588 [2024-07-23 10:27:37.979999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.588 [2024-07-23 10:27:37.980066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.588 [2024-07-23 10:27:37.980088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.588 [2024-07-23 10:27:37.980193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.588 [2024-07-23 10:27:37.980218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.588 #45 NEW cov: 12146 ft: 13269 corp: 7/683b lim: 120 exec/s: 0 rss: 71Mb L: 114/114 MS: 1 ShuffleBytes- 00:06:49.588 [2024-07-23 10:27:38.040311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.588 [2024-07-23 10:27:38.040340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.588 [2024-07-23 10:27:38.040422] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.588 [2024-07-23 10:27:38.040442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.588 [2024-07-23 10:27:38.040515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.588 [2024-07-23 10:27:38.040533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.588 [2024-07-23 10:27:38.040622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.588 [2024-07-23 10:27:38.040645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.588 #46 NEW cov: 12146 ft: 13328 corp: 8/798b lim: 120 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 InsertByte- 00:06:49.847 [2024-07-23 10:27:38.110514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.110546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.110621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.110639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.110725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15336006690509477076 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.110746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.110846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.110865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.847 #47 NEW cov: 12146 ft: 13373 corp: 9/910b lim: 120 exec/s: 0 rss: 72Mb L: 112/115 MS: 1 ChangeBinInt- 00:06:49.847 [2024-07-23 10:27:38.180757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.180794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.180884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.180904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.180979] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.180998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.181104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.181123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.847 #48 NEW cov: 12146 ft: 13430 corp: 10/1022b lim: 120 exec/s: 0 rss: 72Mb L: 112/115 MS: 1 ChangeBit- 00:06:49.847 [2024-07-23 10:27:38.230970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.231002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.231090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.231112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.231162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.231180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.231276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.231295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.847 #49 NEW cov: 12146 ft: 13522 corp: 11/1141b lim: 120 exec/s: 0 rss: 72Mb L: 119/119 MS: 1 CopyPart- 00:06:49.847 [2024-07-23 10:27:38.281209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.847 [2024-07-23 10:27:38.281240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.847 [2024-07-23 10:27:38.281327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.848 [2024-07-23 10:27:38.281347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.848 [2024-07-23 10:27:38.281416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.848 [2024-07-23 10:27:38.281435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.848 [2024-07-23 10:27:38.281538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.848 [2024-07-23 10:27:38.281557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.848 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:49.848 #50 NEW cov: 12169 ft: 13558 corp: 12/1254b lim: 120 exec/s: 0 rss: 72Mb L: 113/119 MS: 1 InsertByte- 00:06:49.848 [2024-07-23 10:27:38.331390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.848 [2024-07-23 10:27:38.331421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.848 [2024-07-23 10:27:38.331490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.848 [2024-07-23 10:27:38.331509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.848 [2024-07-23 10:27:38.331579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15336116212175525076 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.848 [2024-07-23 10:27:38.331600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.848 [2024-07-23 10:27:38.331706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.848 [2024-07-23 10:27:38.331726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.107 #51 NEW cov: 12169 ft: 13590 corp: 13/1367b lim: 120 exec/s: 0 rss: 72Mb L: 113/119 MS: 1 InsertByte- 00:06:50.107 [2024-07-23 10:27:38.391715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.107 [2024-07-23 10:27:38.391747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.107 [2024-07-23 10:27:38.391818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.107 [2024-07-23 10:27:38.391839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.107 [2024-07-23 10:27:38.391882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.107 [2024-07-23 10:27:38.391905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.107 [2024-07-23 10:27:38.392006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.107 [2024-07-23 10:27:38.392029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.107 #52 NEW cov: 12169 ft: 13602 corp: 14/1486b lim: 120 exec/s: 52 rss: 72Mb L: 119/119 MS: 1 CopyPart- 00:06:50.107 [2024-07-23 10:27:38.441939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.107 [2024-07-23 10:27:38.441968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.107 [2024-07-23 10:27:38.442043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15276209939611440340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.107 [2024-07-23 10:27:38.442062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.107 [2024-07-23 10:27:38.442130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.107 [2024-07-23 10:27:38.442151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.107 [2024-07-23 10:27:38.442253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.442275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.108 #53 NEW cov: 12169 ft: 13658 corp: 15/1599b lim: 120 exec/s: 53 rss: 72Mb L: 113/119 MS: 1 CrossOver- 00:06:50.108 [2024-07-23 10:27:38.502113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.502144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.108 [2024-07-23 10:27:38.502218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15276209939611440340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.502242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.108 [2024-07-23 10:27:38.502295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.502315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.108 [2024-07-23 10:27:38.502414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.502437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.108 #54 NEW cov: 12169 ft: 13677 corp: 16/1712b lim: 120 exec/s: 54 rss: 72Mb L: 113/119 MS: 1 ShuffleBytes- 00:06:50.108 [2024-07-23 10:27:38.572331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.572361] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.108 [2024-07-23 10:27:38.572448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15276209939611440340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.572470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.108 [2024-07-23 10:27:38.572533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.572554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.108 [2024-07-23 10:27:38.572651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.108 [2024-07-23 10:27:38.572672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.108 #65 NEW cov: 12169 ft: 13743 corp: 17/1825b lim: 120 exec/s: 65 rss: 72Mb L: 113/119 MS: 1 ShuffleBytes- 00:06:50.368 [2024-07-23 10:27:38.622532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.622564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.622641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.622663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.622720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.622742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.622836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1152921504606846976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.622855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.368 #66 NEW cov: 12169 ft: 13757 corp: 18/1939b lim: 120 exec/s: 66 rss: 72Mb L: 114/119 MS: 1 ChangeBit- 00:06:50.368 [2024-07-23 10:27:38.692846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.692878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.692949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:70650219154374656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.692971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.693037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.693056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.693159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.693178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.368 #67 NEW cov: 12169 ft: 13773 corp: 19/2053b lim: 120 exec/s: 67 rss: 72Mb L: 114/119 MS: 1 ChangeBinInt- 00:06:50.368 [2024-07-23 10:27:38.743045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.743076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.743165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:70650219154374656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.743183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.743246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.743271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.743367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.743385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.368 #68 NEW cov: 12169 ft: 13805 corp: 20/2167b lim: 120 exec/s: 68 rss: 72Mb L: 114/119 MS: 1 ShuffleBytes- 00:06:50.368 [2024-07-23 10:27:38.813415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.813448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.813541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15276209939611440340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.813559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.813660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.813681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.368 [2024-07-23 10:27:38.813785] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.368 [2024-07-23 10:27:38.813807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.368 #69 NEW cov: 12169 ft: 13826 corp: 21/2281b lim: 120 exec/s: 69 rss: 73Mb L: 114/119 MS: 1 CrossOver- 00:06:50.628 [2024-07-23 10:27:38.883612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.883643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:38.883710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.883728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:38.883802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.883820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:38.883922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4503599627370496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.883941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.628 #70 NEW cov: 12169 ft: 13871 corp: 22/2396b lim: 120 exec/s: 70 rss: 73Mb L: 115/119 MS: 1 InsertByte- 00:06:50.628 [2024-07-23 10:27:38.943862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.943895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:38.943998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15276209939611440340 len:48574 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.944017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:38.944115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.944134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:38.944234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:38.944250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.628 #71 NEW cov: 12169 ft: 13910 corp: 23/2515b lim: 120 exec/s: 71 rss: 73Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:06:50.628 [2024-07-23 
10:27:39.004132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.004165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.004228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.004251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.004321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.004339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.004432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:32 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.004448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.628 #72 NEW cov: 12169 ft: 13914 corp: 24/2631b lim: 120 exec/s: 72 rss: 73Mb L: 116/119 MS: 1 CMP- DE: "\000\037"- 00:06:50.628 [2024-07-23 10:27:39.054841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.054871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.054977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.054997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.055056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.055076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.055171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.055192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.055291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.055310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.628 #73 NEW cov: 12169 ft: 13966 corp: 25/2751b lim: 120 exec/s: 73 rss: 73Mb L: 120/120 MS: 1 CopyPart- 00:06:50.628 [2024-07-23 10:27:39.115089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:50.628 [2024-07-23 10:27:39.115122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.115202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.115224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.115279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.115299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.115397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.115417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.628 [2024-07-23 10:27:39.115510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.628 [2024-07-23 10:27:39.115531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.888 #74 NEW cov: 12169 ft: 14005 corp: 26/2871b lim: 120 exec/s: 74 rss: 73Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:06:50.888 [2024-07-23 10:27:39.164820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.164848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.164917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.164937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.164999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.165019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.888 #75 NEW cov: 12169 ft: 14354 corp: 27/2964b lim: 120 exec/s: 75 rss: 73Mb L: 93/120 MS: 1 EraseBytes- 00:06:50.888 [2024-07-23 10:27:39.225379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.225410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.225475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.225496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.225568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.225588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.225688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.225708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.888 #76 NEW cov: 12169 ft: 14356 corp: 28/3079b lim: 120 exec/s: 76 rss: 73Mb L: 115/120 MS: 1 EraseBytes- 00:06:50.888 [2024-07-23 10:27:39.275896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.275926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.276013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.276036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.276096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:32 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.276114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.276211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.276231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.276323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.276344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.888 #77 NEW cov: 12169 ft: 14398 corp: 29/3199b lim: 120 exec/s: 77 rss: 73Mb L: 120/120 MS: 1 ChangeBit- 00:06:50.888 [2024-07-23 10:27:39.345686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15336116638186132692 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.345716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.345798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.888 [2024-07-23 10:27:39.345817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.888 [2024-07-23 10:27:39.345880] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15336116212175525076 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:50.888 [2024-07-23 10:27:39.345896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:50.888 [2024-07-23 10:27:39.346004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:50.888 [2024-07-23 10:27:39.346025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:50.888 #78 NEW cov: 12169 ft: 14429 corp: 30/3312b lim: 120 exec/s: 39 rss: 73Mb L: 113/120 MS: 1 CopyPart-
00:06:50.888 #78 DONE cov: 12169 ft: 14429 corp: 30/3312b lim: 120 exec/s: 39 rss: 73Mb
00:06:50.888 ###### Recommended dictionary. ######
00:06:50.888 "\000\037" # Uses: 0
00:06:50.888 ###### End of recommended dictionary. ######
00:06:50.888 Done 78 runs in 2 second(s)
00:06:51.154 10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
10:27:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
10:27:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
10:27:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
10:27:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
[2024-07-23 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:51.154 [2024-07-23 10:27:39.539293] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432228 ]
00:06:51.559 EAL: No free 2048 kB hugepages reported on node 1
00:06:51.559 [2024-07-23 10:27:39.840902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.559 [2024-07-23 10:27:39.875066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.559 [2024-07-23 10:27:39.928150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:51.559 [2024-07-23 10:27:39.944485] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 ***
00:06:51.559 INFO: Running with entropic power schedule (0xFF, 100).
00:06:51.559 INFO: Seed: 3483749703
00:06:51.830 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b),
00:06:51.830 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890),
00:06:51.830 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:06:51.830 INFO: A corpus is not provided, starting from an empty corpus
00:06:51.830 #2 INITED exec/s: 0 rss: 64Mb
00:06:51.830 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:51.830 This may also happen if the target rejected all inputs we tried so far
00:06:51.830 [2024-07-23 10:27:39.999927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:06:51.830 [2024-07-23 10:27:39.999960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:51.830 [2024-07-23 10:27:40.000016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:06:51.830 [2024-07-23 10:27:40.000032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:51.830 [2024-07-23 10:27:40.000084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:06:51.830 [2024-07-23 10:27:40.000098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:51.830 [2024-07-23 10:27:40.000150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:06:51.830 [2024-07-23 10:27:40.000163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:51.830 NEW_FUNC[1/691]: 0x4b10e0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562
00:06:51.830 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:51.830 #24 NEW cov: 11868 ft: 11865 corp: 2/81b lim: 100 exec/s: 0 rss: 70Mb L: 80/80 MS: 2 ChangeBit-InsertRepeatedBytes-
00:06:52.088 [2024-07-23 10:27:40.340419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:06:52.088 [2024-07-23 10:27:40.340471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.088 #28 NEW cov: 11998 ft: 12947 corp: 3/104b lim: 100 exec/s: 0 rss: 70Mb L: 23/80 MS: 4 InsertByte-CrossOver-ChangeBit-CopyPart- 00:06:52.088 [2024-07-23 10:27:40.380394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.088 [2024-07-23 10:27:40.380423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.088 #29 NEW cov: 12004 ft: 13206 corp: 4/127b lim: 100 exec/s: 0 rss: 70Mb L: 23/80 MS: 1 ChangeBinInt- 00:06:52.088 [2024-07-23 10:27:40.430518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.088 [2024-07-23 10:27:40.430546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.089 #30 NEW cov: 12089 ft: 13444 corp: 5/151b lim: 100 exec/s: 0 rss: 70Mb L: 24/80 MS: 1 InsertByte- 00:06:52.089 [2024-07-23 10:27:40.480649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.089 [2024-07-23 10:27:40.480676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.089 #36 NEW cov: 12089 ft: 13529 corp: 6/176b lim: 100 exec/s: 0 rss: 71Mb L: 25/80 MS: 1 InsertByte- 00:06:52.089 [2024-07-23 10:27:40.531144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.089 [2024-07-23 10:27:40.531171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.089 [2024-07-23 10:27:40.531217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.089 [2024-07-23 10:27:40.531232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.089 [2024-07-23 10:27:40.531284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.089 [2024-07-23 10:27:40.531313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.089 [2024-07-23 10:27:40.531366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:52.089 [2024-07-23 10:27:40.531380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.089 #42 NEW cov: 12089 ft: 13624 corp: 7/257b lim: 100 exec/s: 0 rss: 71Mb L: 81/81 MS: 1 InsertByte- 00:06:52.089 [2024-07-23 10:27:40.580940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.089 [2024-07-23 10:27:40.580967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.348 #48 NEW cov: 12089 ft: 13781 corp: 8/282b lim: 100 exec/s: 0 rss: 71Mb L: 25/81 MS: 1 InsertByte- 00:06:52.348 [2024-07-23 10:27:40.620995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.348 [2024-07-23 10:27:40.621021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.348 #49 NEW cov: 12089 ft: 13790 corp: 9/305b lim: 100 exec/s: 0 rss: 71Mb L: 23/81 MS: 1 EraseBytes- 00:06:52.348 [2024-07-23 10:27:40.661184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.348 [2024-07-23 10:27:40.661209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.348 #50 NEW cov: 12089 ft: 13848 corp: 10/330b lim: 100 exec/s: 0 rss: 71Mb L: 25/81 MS: 1 ShuffleBytes- 00:06:52.348 [2024-07-23 10:27:40.711287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.348 [2024-07-23 10:27:40.711312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.348 #51 NEW cov: 12089 ft: 13890 corp: 11/356b lim: 100 exec/s: 0 rss: 71Mb L: 26/81 MS: 1 InsertByte- 00:06:52.348 [2024-07-23 10:27:40.761784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.348 [2024-07-23 10:27:40.761810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.348 [2024-07-23 10:27:40.761865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.348 [2024-07-23 10:27:40.761879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.348 [2024-07-23 10:27:40.761927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.348 [2024-07-23 10:27:40.761941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.348 [2024-07-23 10:27:40.761992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:52.348 [2024-07-23 10:27:40.762007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.348 #52 NEW cov: 12089 ft: 13925 corp: 12/436b lim: 100 exec/s: 0 rss: 72Mb L: 80/81 MS: 1 ShuffleBytes- 00:06:52.348 [2024-07-23 10:27:40.801905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.348 [2024-07-23 10:27:40.801930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.348 [2024-07-23 10:27:40.801981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.348 [2024-07-23 10:27:40.801996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.348 [2024-07-23 10:27:40.802045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.348 [2024-07-23 10:27:40.802059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.348 [2024-07-23 10:27:40.802110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:52.348 [2024-07-23 10:27:40.802125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.348 #53 NEW cov: 12089 ft: 14045 corp: 13/517b lim: 100 exec/s: 0 rss: 72Mb L: 81/81 MS: 1 ShuffleBytes- 00:06:52.607 [2024-07-23 10:27:40.851703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.607 [2024-07-23 10:27:40.851728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.607 #54 NEW cov: 12089 ft: 14068 corp: 14/555b lim: 100 exec/s: 0 rss: 72Mb L: 38/81 MS: 1 CopyPart- 00:06:52.607 [2024-07-23 10:27:40.892007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.607 [2024-07-23 10:27:40.892032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.607 [2024-07-23 10:27:40.892068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.607 [2024-07-23 10:27:40.892087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.607 [2024-07-23 10:27:40.892141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.607 [2024-07-23 10:27:40.892172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.607 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:52.607 #55 NEW cov: 12112 ft: 14347 corp: 15/619b lim: 100 exec/s: 0 rss: 72Mb L: 64/81 MS: 1 CrossOver- 00:06:52.607 [2024-07-23 10:27:40.932237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.607 [2024-07-23 10:27:40.932263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.607 [2024-07-23 10:27:40.932330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.608 [2024-07-23 10:27:40.932343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.608 [2024-07-23 10:27:40.932394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.608 [2024-07-23 10:27:40.932408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.608 [2024-07-23 10:27:40.932460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:52.608 [2024-07-23 10:27:40.932475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.608 #56 NEW cov: 12112 ft: 14383 corp: 16/706b lim: 100 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:06:52.608 [2024-07-23 10:27:40.982024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.608 [2024-07-23 10:27:40.982050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.608 #62 NEW cov: 12112 ft: 14386 corp: 17/729b lim: 100 exec/s: 
62 rss: 72Mb L: 23/87 MS: 1 ChangeByte- 00:06:52.608 [2024-07-23 10:27:41.022367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.608 [2024-07-23 10:27:41.022393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.608 [2024-07-23 10:27:41.022435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.608 [2024-07-23 10:27:41.022450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.608 [2024-07-23 10:27:41.022501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.608 [2024-07-23 10:27:41.022516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.608 #63 NEW cov: 12112 ft: 14400 corp: 18/793b lim: 100 exec/s: 63 rss: 72Mb L: 64/87 MS: 1 ChangeByte- 00:06:52.608 [2024-07-23 10:27:41.072503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.608 [2024-07-23 10:27:41.072528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.608 [2024-07-23 10:27:41.072562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.608 [2024-07-23 10:27:41.072577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.608 [2024-07-23 10:27:41.072628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.608 [2024-07-23 10:27:41.072642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.608 #64 NEW cov: 12112 ft: 14439 corp: 19/857b lim: 100 exec/s: 64 rss: 72Mb L: 64/87 MS: 1 CopyPart- 00:06:52.866 [2024-07-23 10:27:41.112646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.866 [2024-07-23 10:27:41.112672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.112719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.866 [2024-07-23 10:27:41.112733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.112786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.866 [2024-07-23 10:27:41.112801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.866 #65 NEW cov: 12112 ft: 14474 corp: 20/930b lim: 100 exec/s: 65 rss: 72Mb L: 73/87 MS: 1 CrossOver- 00:06:52.866 [2024-07-23 10:27:41.162898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.866 [2024-07-23 10:27:41.162926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 
10:27:41.162990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.866 [2024-07-23 10:27:41.163005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.163056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.866 [2024-07-23 10:27:41.163071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.163126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:52.866 [2024-07-23 10:27:41.163141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.866 #66 NEW cov: 12112 ft: 14482 corp: 21/1012b lim: 100 exec/s: 66 rss: 72Mb L: 82/87 MS: 1 InsertByte- 00:06:52.866 [2024-07-23 10:27:41.202771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.866 [2024-07-23 10:27:41.202800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.202837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.866 [2024-07-23 10:27:41.202851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.866 #67 NEW cov: 12112 ft: 14718 corp: 22/1052b lim: 100 exec/s: 67 rss: 72Mb L: 40/87 MS: 1 CopyPart- 00:06:52.866 [2024-07-23 10:27:41.243119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.866 [2024-07-23 10:27:41.243146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.243192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.866 [2024-07-23 10:27:41.243207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.243259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.866 [2024-07-23 10:27:41.243273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.243327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:52.866 [2024-07-23 10:27:41.243344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.866 #68 NEW cov: 12112 ft: 14738 corp: 23/1147b lim: 100 exec/s: 68 rss: 72Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:06:52.866 [2024-07-23 10:27:41.293157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.866 [2024-07-23 10:27:41.293184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.293228] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:52.866 [2024-07-23 10:27:41.293244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.866 [2024-07-23 10:27:41.293298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:52.866 [2024-07-23 10:27:41.293314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.866 #69 NEW cov: 12112 ft: 14752 corp: 24/1211b lim: 100 exec/s: 69 rss: 72Mb L: 64/95 MS: 1 CopyPart- 00:06:52.866 [2024-07-23 10:27:41.333036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:52.866 [2024-07-23 10:27:41.333062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.124 #70 NEW cov: 12112 ft: 14830 corp: 25/1238b lim: 100 exec/s: 70 rss: 72Mb L: 27/95 MS: 1 InsertByte- 00:06:53.124 [2024-07-23 10:27:41.383176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.124 [2024-07-23 10:27:41.383203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.124 [2024-07-23 10:27:41.423292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.124 [2024-07-23 10:27:41.423318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.124 #72 NEW cov: 12112 ft: 14848 corp: 26/1265b lim: 100 exec/s: 72 rss: 72Mb L: 27/95 MS: 2 InsertByte-InsertByte- 00:06:53.125 [2024-07-23 10:27:41.463407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.125 [2024-07-23 10:27:41.463435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.125 #73 NEW cov: 12112 ft: 14876 corp: 27/1294b lim: 100 exec/s: 73 rss: 72Mb L: 29/95 MS: 1 CrossOver- 00:06:53.125 [2024-07-23 10:27:41.513531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.125 [2024-07-23 10:27:41.513556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.125 #74 NEW cov: 12112 ft: 14934 corp: 28/1328b lim: 100 exec/s: 74 rss: 72Mb L: 34/95 MS: 1 EraseBytes- 00:06:53.125 [2024-07-23 10:27:41.563672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.125 [2024-07-23 10:27:41.563698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.125 #75 NEW cov: 12112 ft: 14942 corp: 29/1353b lim: 100 exec/s: 75 rss: 73Mb L: 25/95 MS: 1 ShuffleBytes- 00:06:53.125 [2024-07-23 10:27:41.603765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.125 [2024-07-23 10:27:41.603797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.383 #76 NEW cov: 12112 ft: 14959 corp: 30/1387b lim: 100 exec/s: 76 rss: 73Mb L: 34/95 MS: 1 CopyPart- 
00:06:53.383 [2024-07-23 10:27:41.654197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.383 [2024-07-23 10:27:41.654222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.383 [2024-07-23 10:27:41.654258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:53.383 [2024-07-23 10:27:41.654272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.383 [2024-07-23 10:27:41.654325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:53.383 [2024-07-23 10:27:41.654340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.383 #77 NEW cov: 12112 ft: 14973 corp: 31/1460b lim: 100 exec/s: 77 rss: 73Mb L: 73/95 MS: 1 ShuffleBytes- 00:06:53.383 [2024-07-23 10:27:41.704066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.383 [2024-07-23 10:27:41.704092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.383 #78 NEW cov: 12112 ft: 14979 corp: 32/1483b lim: 100 exec/s: 78 rss: 73Mb L: 23/95 MS: 1 CopyPart- 00:06:53.383 [2024-07-23 10:27:41.744291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.383 [2024-07-23 10:27:41.744317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.383 [2024-07-23 10:27:41.744352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:53.383 [2024-07-23 10:27:41.744366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.383 #79 NEW cov: 12112 ft: 14984 corp: 33/1535b lim: 100 exec/s: 79 rss: 73Mb L: 52/95 MS: 1 CopyPart- 00:06:53.383 [2024-07-23 10:27:41.784277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.383 [2024-07-23 10:27:41.784303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.383 #80 NEW cov: 12112 ft: 15003 corp: 34/1562b lim: 100 exec/s: 80 rss: 73Mb L: 27/95 MS: 1 EraseBytes- 00:06:53.383 [2024-07-23 10:27:41.824709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.383 [2024-07-23 10:27:41.824734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.383 [2024-07-23 10:27:41.824791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:53.383 [2024-07-23 10:27:41.824806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.383 [2024-07-23 10:27:41.824855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:53.383 [2024-07-23 10:27:41.824869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.383 [2024-07-23 10:27:41.824919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:53.383 [2024-07-23 10:27:41.824932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.383 #81 NEW cov: 12112 ft: 15005 corp: 35/1657b lim: 100 exec/s: 81 rss: 73Mb L: 95/95 MS: 1 ChangeBinInt- 00:06:53.383 [2024-07-23 10:27:41.874538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.383 [2024-07-23 10:27:41.874562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.642 #82 NEW cov: 12112 ft: 15033 corp: 36/1680b lim: 100 exec/s: 82 rss: 73Mb L: 23/95 MS: 1 ChangeBit- 00:06:53.642 [2024-07-23 10:27:41.914973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.642 [2024-07-23 10:27:41.914999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.642 [2024-07-23 10:27:41.915041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:53.642 [2024-07-23 10:27:41.915057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.642 [2024-07-23 10:27:41.915105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:53.642 [2024-07-23 10:27:41.915120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.642 [2024-07-23 10:27:41.915170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:53.642 [2024-07-23 10:27:41.915182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.642 #83 NEW cov: 12112 ft: 15059 corp: 37/1775b lim: 100 exec/s: 83 rss: 73Mb L: 95/95 MS: 1 ShuffleBytes- 00:06:53.642 [2024-07-23 10:27:41.964790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:53.642 [2024-07-23 10:27:41.964816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.642 #84 NEW cov: 12112 ft: 15069 corp: 38/1810b lim: 100 exec/s: 42 rss: 73Mb L: 35/95 MS: 1 CopyPart- 00:06:53.642 #84 DONE cov: 12112 ft: 15069 corp: 38/1810b lim: 100 exec/s: 42 rss: 73Mb 00:06:53.642 Done 84 runs in 2 second(s) 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:06:53.642 10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
10:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
[2024-07-23 10:27:42.150083] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
[2024-07-23 10:27:42.150163] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432598 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-23 10:27:42.446643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-23 10:27:42.480256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-23 10:27:42.533093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-07-23 10:27:42.549414] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 1791770576
INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b),
INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 64Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:54.160 This may also happen if the target rejected all inputs we tried so far 00:06:54.160 [2024-07-23 10:27:42.597979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:704643072 len:17707 00:06:54.160 [2024-07-23 10:27:42.598011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.420 NEW_FUNC[1/691]: 0x4b40a0 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:06:54.420 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:54.420 #26 NEW cov: 11846 ft: 11845 corp: 2/11b lim: 50 exec/s: 0 rss: 70Mb L: 10/10 MS: 4 ChangeByte-InsertByte-CopyPart-CMP- DE: "\000\000\000\000\000\000\000E"- 00:06:54.420 [2024-07-23 10:27:42.918850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 00:06:54.420 [2024-07-23 10:27:42.918904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.679 #27 NEW cov: 11976 ft: 12403 corp: 3/29b lim: 50 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:54.679 [2024-07-23 10:27:42.958814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 00:06:54.679 [2024-07-23 10:27:42.958846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.679 #28 NEW cov: 11982 ft: 12818 corp: 4/47b lim: 50 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 ChangeBinInt- 00:06:54.679 [2024-07-23 10:27:43.008998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13510799586754560 len:70 00:06:54.679 [2024-07-23 10:27:43.009027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.679 #29 NEW cov: 12067 ft: 13088 corp: 5/58b lim: 50 exec/s: 0 rss: 71Mb L: 11/18 MS: 1 InsertByte- 00:06:54.679 [2024-07-23 10:27:43.059298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9838263503864498312 len:34953 00:06:54.679 [2024-07-23 10:27:43.059327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.679 [2024-07-23 10:27:43.059362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9838263505978427528 len:34953 00:06:54.679 [2024-07-23 10:27:43.059378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.679 [2024-07-23 10:27:43.059431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9838263505978427528 len:34953 00:06:54.679 [2024-07-23 10:27:43.059447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.679 #30 NEW cov: 12067 ft: 13549 corp: 6/88b lim: 50 exec/s: 0 rss: 71Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:54.679 [2024-07-23 10:27:43.099230] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12593317564001611693 len:6145 00:06:54.679 [2024-07-23 10:27:43.099258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.679 #31 NEW cov: 12067 ft: 13617 corp: 7/99b lim: 50 exec/s: 0 rss: 71Mb L: 11/30 MS: 1 CMP- DE: "\357\255\256\304s\232\030\000"- 00:06:54.679 [2024-07-23 10:27:43.149565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:704643072 len:17707 00:06:54.679 [2024-07-23 10:27:43.149592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.679 [2024-07-23 10:27:43.149624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:54.679 [2024-07-23 10:27:43.149639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.679 [2024-07-23 10:27:43.149692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:54.679 [2024-07-23 10:27:43.149706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.679 #32 NEW cov: 12067 ft: 13673 corp: 8/136b lim: 50 exec/s: 0 rss: 71Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:54.939 [2024-07-23 10:27:43.189647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:704643072 len:17707 00:06:54.939 [2024-07-23 10:27:43.189674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.939 [2024-07-23 10:27:43.189712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:54.939 [2024-07-23 10:27:43.189728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.939 [2024-07-23 10:27:43.189786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:54.939 [2024-07-23 10:27:43.189818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.939 #33 NEW cov: 12067 ft: 13729 corp: 9/167b lim: 50 exec/s: 0 rss: 71Mb L: 31/37 MS: 1 EraseBytes- 00:06:54.939 [2024-07-23 10:27:43.239589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:704643072 len:17707 00:06:54.939 [2024-07-23 10:27:43.239617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.939 #39 NEW cov: 12067 ft: 13804 corp: 10/177b lim: 50 exec/s: 0 rss: 71Mb L: 10/37 MS: 1 CopyPart- 00:06:54.939 [2024-07-23 10:27:43.279805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 00:06:54.939 [2024-07-23 10:27:43.279832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:54.939 [2024-07-23 10:27:43.279867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 00:06:54.939 [2024-07-23 10:27:43.279882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.939 #40 NEW cov: 12067 ft: 14082 corp: 11/205b lim: 50 exec/s: 0 rss: 71Mb L: 28/37 MS: 1 CopyPart- 00:06:54.939 [2024-07-23 10:27:43.320031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:704643072 len:17707 00:06:54.939 [2024-07-23 10:27:43.320060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.939 [2024-07-23 10:27:43.320095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:54.939 [2024-07-23 10:27:43.320112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.939 [2024-07-23 10:27:43.320164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:54.939 [2024-07-23 10:27:43.320181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.939 #41 NEW cov: 12067 ft: 14122 corp: 12/236b lim: 50 exec/s: 0 rss: 72Mb L: 31/37 MS: 1 ShuffleBytes- 00:06:54.939 [2024-07-23 10:27:43.370190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2233785415880409088 len:17707 00:06:54.939 [2024-07-23 10:27:43.370216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.939 [2024-07-23 10:27:43.370268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:54.939 [2024-07-23 10:27:43.370285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.939 [2024-07-23 10:27:43.370339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:54.939 [2024-07-23 10:27:43.370356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.939 #42 NEW cov: 12067 ft: 14149 corp: 13/267b lim: 50 exec/s: 0 rss: 72Mb L: 31/37 MS: 1 ChangeBinInt- 00:06:54.939 [2024-07-23 10:27:43.410158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:06:54.939 [2024-07-23 10:27:43.410184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.939 [2024-07-23 10:27:43.410219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:06:54.939 [2024-07-23 10:27:43.410234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.939 #43 NEW cov: 12067 ft: 14170 corp: 14/291b lim: 50 exec/s: 0 rss: 72Mb L: 24/37 MS: 1 
InsertRepeatedBytes- 00:06:55.198 [2024-07-23 10:27:43.450302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:06:55.198 [2024-07-23 10:27:43.450328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.198 [2024-07-23 10:27:43.450362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057594037927937 len:1 00:06:55.198 [2024-07-23 10:27:43.450377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.198 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:55.198 #44 NEW cov: 12090 ft: 14185 corp: 15/315b lim: 50 exec/s: 0 rss: 72Mb L: 24/37 MS: 1 ChangeBinInt- 00:06:55.198 [2024-07-23 10:27:43.500556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 00:06:55.198 [2024-07-23 10:27:43.500583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.198 [2024-07-23 10:27:43.500622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3761758566050182196 len:11484 00:06:55.198 [2024-07-23 10:27:43.500638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.198 [2024-07-23 10:27:43.500693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3761688988727515136 len:13365 00:06:55.198 [2024-07-23 10:27:43.500709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.198 #45 NEW cov: 12090 ft: 14230 corp: 16/351b lim: 50 exec/s: 0 rss: 72Mb L: 36/37 MS: 1 CMP- DE: "s|,\333x\232\030\000"- 00:06:55.199 [2024-07-23 10:27:43.550610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13510799586754560 len:13365 00:06:55.199 [2024-07-23 10:27:43.550637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.199 [2024-07-23 10:27:43.550684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3761688764241687604 len:17707 00:06:55.199 [2024-07-23 10:27:43.550700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.199 #46 NEW cov: 12090 ft: 14257 corp: 17/371b lim: 50 exec/s: 0 rss: 72Mb L: 20/37 MS: 1 CrossOver- 00:06:55.199 [2024-07-23 10:27:43.590664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13360 00:06:55.199 [2024-07-23 10:27:43.590691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.199 [2024-07-23 10:27:43.590741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 00:06:55.199 [2024-07-23 10:27:43.590756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.199 #47 NEW cov: 12090 ft: 14279 corp: 18/399b lim: 50 exec/s: 47 rss: 72Mb L: 28/37 MS: 1 ChangeBinInt- 00:06:55.199 [2024-07-23 10:27:43.630785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761689640415015988 len:52172 00:06:55.199 [2024-07-23 10:27:43.630812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.199 [2024-07-23 10:27:43.630846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14786500877926255563 len:13365 00:06:55.199 [2024-07-23 10:27:43.630861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.199 #48 NEW cov: 12090 ft: 14289 corp: 19/427b lim: 50 exec/s: 48 rss: 72Mb L: 28/37 MS: 1 ChangeBinInt- 00:06:55.199 [2024-07-23 10:27:43.670855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688944630314036 len:53 00:06:55.199 [2024-07-23 10:27:43.670881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.199 #49 NEW cov: 12090 ft: 14310 corp: 20/437b lim: 50 exec/s: 49 rss: 72Mb L: 10/37 MS: 1 CrossOver- 00:06:55.458 [2024-07-23 10:27:43.711121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 00:06:55.458 [2024-07-23 10:27:43.711147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.458 [2024-07-23 10:27:43.711194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14693874269434898 len:1 00:06:55.458 [2024-07-23 10:27:43.711210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.458 [2024-07-23 10:27:43.711276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:06:55.458 [2024-07-23 10:27:43.711292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.458 #50 NEW cov: 12090 ft: 14331 corp: 21/472b lim: 50 exec/s: 50 rss: 72Mb L: 35/37 MS: 1 InsertRepeatedBytes- 00:06:55.458 [2024-07-23 10:27:43.761308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:587202560 len:17707 00:06:55.458 [2024-07-23 10:27:43.761335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.458 [2024-07-23 10:27:43.761369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:55.458 [2024-07-23 10:27:43.761384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.458 [2024-07-23 10:27:43.761436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:55.458 [2024-07-23 10:27:43.761450] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.458 #51 NEW cov: 12090 ft: 14336 corp: 22/509b lim: 50 exec/s: 51 rss: 72Mb L: 37/37 MS: 1 ChangeByte- 00:06:55.458 [2024-07-23 10:27:43.801176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:69524315657207808 len:47659 00:06:55.458 [2024-07-23 10:27:43.801202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.458 #52 NEW cov: 12090 ft: 14365 corp: 23/519b lim: 50 exec/s: 52 rss: 72Mb L: 10/37 MS: 1 ChangeBinInt- 00:06:55.458 [2024-07-23 10:27:43.851578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1736164147877380120 len:6169 00:06:55.459 [2024-07-23 10:27:43.851605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.459 [2024-07-23 10:27:43.851640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1736164148113840152 len:6169 00:06:55.459 [2024-07-23 10:27:43.851654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.459 [2024-07-23 10:27:43.851705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1736164045034625048 len:1 00:06:55.459 [2024-07-23 10:27:43.851720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.459 #56 NEW cov: 12090 ft: 14378 corp: 24/550b lim: 50 exec/s: 56 rss: 72Mb L: 31/37 MS: 4 ShuffleBytes-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:06:55.459 [2024-07-23 10:27:43.891658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:704643072 len:17707 00:06:55.459 [2024-07-23 10:27:43.891684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.459 [2024-07-23 10:27:43.891729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:55.459 [2024-07-23 10:27:43.891746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.459 [2024-07-23 10:27:43.891800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13830536794479783919 len:61424 00:06:55.459 [2024-07-23 10:27:43.891817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.459 #57 NEW cov: 12090 ft: 14382 corp: 25/581b lim: 50 exec/s: 57 rss: 72Mb L: 31/37 MS: 1 ChangeByte- 00:06:55.459 [2024-07-23 10:27:43.941705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13510799586754560 len:13365 00:06:55.459 [2024-07-23 10:27:43.941732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.459 [2024-07-23 10:27:43.941789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:3761688764241687604 len:17707 00:06:55.459 [2024-07-23 10:27:43.941805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.717 #58 NEW cov: 12090 ft: 14383 corp: 26/601b lim: 50 exec/s: 58 rss: 72Mb L: 20/37 MS: 1 ShuffleBytes- 00:06:55.717 [2024-07-23 10:27:43.991921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:46179488366592 len:1 00:06:55.717 [2024-07-23 10:27:43.991947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.717 [2024-07-23 10:27:43.991980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4984059747415097344 len:61424 00:06:55.717 [2024-07-23 10:27:43.991995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.717 [2024-07-23 10:27:43.992048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:55.717 [2024-07-23 10:27:43.992078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.717 #59 NEW cov: 12090 ft: 14387 corp: 27/638b lim: 50 exec/s: 59 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:55.717 [2024-07-23 10:27:44.041874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15814559520093600812 len:70 00:06:55.717 [2024-07-23 10:27:44.041904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.717 #60 NEW cov: 12090 ft: 14403 corp: 28/649b lim: 50 exec/s: 60 rss: 72Mb L: 11/37 MS: 1 PersAutoDict- DE: "s|,\333x\232\030\000"- 00:06:55.717 [2024-07-23 10:27:44.081968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579987005 len:13365 00:06:55.717 [2024-07-23 10:27:44.081996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.717 #61 NEW cov: 12090 ft: 14461 corp: 29/668b lim: 50 exec/s: 61 rss: 72Mb L: 19/37 MS: 1 InsertByte- 00:06:55.717 [2024-07-23 10:27:44.122025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987577234484 len:13365 00:06:55.717 [2024-07-23 10:27:44.122053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.717 #62 NEW cov: 12090 ft: 14476 corp: 30/686b lim: 50 exec/s: 62 rss: 72Mb L: 18/37 MS: 1 CrossOver- 00:06:55.717 [2024-07-23 10:27:44.162154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688944630314036 len:53 00:06:55.717 [2024-07-23 10:27:44.162183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.717 #63 NEW cov: 12090 ft: 14482 corp: 31/696b lim: 50 exec/s: 63 rss: 73Mb L: 10/37 MS: 1 ShuffleBytes- 00:06:55.717 [2024-07-23 10:27:44.212459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8329996853075357380 len:10753 00:06:55.717 
[2024-07-23 10:27:44.212490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.976 #64 NEW cov: 12090 ft: 14503 corp: 32/714b lim: 50 exec/s: 64 rss: 73Mb L: 18/37 MS: 1 PersAutoDict- DE: "\357\255\256\304s\232\030\000"- 00:06:55.976 [2024-07-23 10:27:44.252466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12515156413414174506 len:39449 00:06:55.976 [2024-07-23 10:27:44.252496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.976 #65 NEW cov: 12090 ft: 14603 corp: 33/726b lim: 50 exec/s: 65 rss: 73Mb L: 12/37 MS: 1 InsertByte- 00:06:55.976 [2024-07-23 10:27:44.302628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3763940787393672253 len:13365 00:06:55.976 [2024-07-23 10:27:44.302657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.976 #66 NEW cov: 12090 ft: 14629 corp: 34/745b lim: 50 exec/s: 66 rss: 73Mb L: 19/37 MS: 1 ChangeBit- 00:06:55.976 [2024-07-23 10:27:44.352842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:46179488366592 len:1 00:06:55.976 [2024-07-23 10:27:44.352871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.976 [2024-07-23 10:27:44.352935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5039457541268963328 len:61424 00:06:55.976 [2024-07-23 10:27:44.352951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.976 #72 NEW cov: 12090 ft: 14658 corp: 35/768b lim: 50 exec/s: 72 rss: 73Mb L: 23/37 MS: 1 EraseBytes- 00:06:55.976 [2024-07-23 10:27:44.402862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12593317564001611611 len:6145 00:06:55.976 [2024-07-23 10:27:44.402889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.976 #73 NEW cov: 12090 ft: 14679 corp: 36/779b lim: 50 exec/s: 73 rss: 73Mb L: 11/37 MS: 1 ChangeByte- 00:06:55.976 [2024-07-23 10:27:44.442964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8690284821181377755 len:70 00:06:55.976 [2024-07-23 10:27:44.442991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.976 #74 NEW cov: 12090 ft: 14727 corp: 37/790b lim: 50 exec/s: 74 rss: 73Mb L: 11/37 MS: 1 PersAutoDict- DE: "s|,\333x\232\030\000"- 00:06:56.236 [2024-07-23 10:27:44.483448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:587202560 len:17707 00:06:56.236 [2024-07-23 10:27:44.483474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.236 [2024-07-23 10:27:44.483535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:56.236 [2024-07-23 10:27:44.483552] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.236 [2024-07-23 10:27:44.483604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:56.236 [2024-07-23 10:27:44.483620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.236 [2024-07-23 10:27:44.483672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073441116159 len:65536 00:06:56.236 [2024-07-23 10:27:44.483688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.236 #75 NEW cov: 12090 ft: 15046 corp: 38/836b lim: 50 exec/s: 75 rss: 73Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:06:56.236 [2024-07-23 10:27:44.533466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 00:06:56.236 [2024-07-23 10:27:44.533492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.236 [2024-07-23 10:27:44.533542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14693874269434898 len:1 00:06:56.236 [2024-07-23 10:27:44.533560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.236 [2024-07-23 10:27:44.533615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13229323905400832 len:1 00:06:56.236 [2024-07-23 10:27:44.533631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.236 #76 NEW cov: 12090 ft: 15058 corp: 39/871b lim: 50 exec/s: 76 rss: 73Mb L: 35/46 MS: 1 ChangeByte- 00:06:56.236 [2024-07-23 10:27:44.583554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14155658054460243968 len:17707 00:06:56.236 [2024-07-23 10:27:44.583581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.236 [2024-07-23 10:27:44.583614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 00:06:56.236 [2024-07-23 10:27:44.583629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.236 [2024-07-23 10:27:44.583682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 00:06:56.236 [2024-07-23 10:27:44.583697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.236 #77 NEW cov: 12090 ft: 15071 corp: 40/902b lim: 50 exec/s: 38 rss: 73Mb L: 31/46 MS: 1 CrossOver- 00:06:56.236 #77 DONE cov: 12090 ft: 15071 corp: 40/902b lim: 50 exec/s: 38 rss: 73Mb 00:06:56.236 ###### Recommended dictionary. 
###### 00:06:56.236 "\000\000\000\000\000\000\000E" # Uses: 1 00:06:56.236 "\357\255\256\304s\232\030\000" # Uses: 1 00:06:56.236 "s|,\333x\232\030\000" # Uses: 2 00:06:56.236 ###### End of recommended dictionary. ###### 00:06:56.236 Done 77 runs in 2 second(s) 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.496 10:27:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:06:56.496 [2024-07-23 10:27:44.789998] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:56.496 [2024-07-23 10:27:44.790076] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432969 ] 00:06:56.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.756 [2024-07-23 10:27:45.093876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.756 [2024-07-23 10:27:45.127197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.756 [2024-07-23 10:27:45.179814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.756 [2024-07-23 10:27:45.196140] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:56.756 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.756 INFO: Seed: 144806876 00:06:56.756 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:06:56.756 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:06:56.756 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:56.756 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.756 #2 INITED exec/s: 0 rss: 64Mb 00:06:56.756 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:56.756 This may also happen if the target rejected all inputs we tried so far 00:06:56.756 [2024-07-23 10:27:45.241538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:56.756 [2024-07-23 10:27:45.241572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.756 [2024-07-23 10:27:45.241626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:56.756 [2024-07-23 10:27:45.241644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.756 [2024-07-23 10:27:45.241701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:56.756 [2024-07-23 10:27:45.241719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.274 NEW_FUNC[1/693]: 0x4b5c60 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:06:57.274 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.274 #23 NEW cov: 11904 ft: 11905 corp: 2/66b lim: 90 exec/s: 0 rss: 71Mb L: 65/65 MS: 1 InsertRepeatedBytes- 00:06:57.274 [2024-07-23 10:27:45.582256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.274 [2024-07-23 10:27:45.582296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.582363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.274 [2024-07-23 10:27:45.582380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:06:57.274 [2024-07-23 10:27:45.582433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.274 [2024-07-23 10:27:45.582450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.274 #24 NEW cov: 12034 ft: 12391 corp: 3/131b lim: 90 exec/s: 0 rss: 72Mb L: 65/65 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:57.274 [2024-07-23 10:27:45.632386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.274 [2024-07-23 10:27:45.632417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.632473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.274 [2024-07-23 10:27:45.632489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.632541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.274 [2024-07-23 10:27:45.632558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.274 #25 NEW cov: 12040 ft: 12708 corp: 4/202b lim: 90 exec/s: 0 rss: 72Mb L: 71/71 MS: 1 InsertRepeatedBytes- 00:06:57.274 [2024-07-23 10:27:45.672586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.274 [2024-07-23 10:27:45.672613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.672658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.274 [2024-07-23 10:27:45.672675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.672728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.274 [2024-07-23 10:27:45.672743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.672800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.274 [2024-07-23 10:27:45.672815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.274 #31 NEW cov: 12125 ft: 13360 corp: 5/286b lim: 90 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:06:57.274 [2024-07-23 10:27:45.722719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.274 [2024-07-23 10:27:45.722746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.722800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.274 [2024-07-23 10:27:45.722817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.722871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.274 [2024-07-23 10:27:45.722886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.722939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.274 [2024-07-23 10:27:45.722954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.274 #32 NEW cov: 12125 ft: 13456 corp: 6/364b lim: 90 exec/s: 0 rss: 72Mb L: 78/84 MS: 1 CopyPart- 00:06:57.274 [2024-07-23 10:27:45.762844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.274 [2024-07-23 10:27:45.762872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.762919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.274 [2024-07-23 10:27:45.762935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.762984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.274 [2024-07-23 10:27:45.763000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.274 [2024-07-23 10:27:45.763053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.274 [2024-07-23 10:27:45.763068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.534 #33 NEW cov: 12125 ft: 13566 corp: 7/439b lim: 90 exec/s: 0 rss: 72Mb L: 75/84 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:57.534 [2024-07-23 10:27:45.802850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.534 [2024-07-23 10:27:45.802877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.802913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.534 [2024-07-23 10:27:45.802930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.802982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.534 [2024-07-23 10:27:45.802998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.534 #34 NEW cov: 12125 ft: 13612 corp: 8/508b lim: 90 exec/s: 0 rss: 72Mb L: 69/84 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:57.534 [2024-07-23 10:27:45.843071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.534 [2024-07-23 10:27:45.843096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.843140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.534 [2024-07-23 10:27:45.843156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.843206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.534 [2024-07-23 10:27:45.843236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.843288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.534 [2024-07-23 10:27:45.843305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.534 #35 NEW cov: 12125 ft: 13634 corp: 9/593b lim: 90 exec/s: 0 rss: 72Mb L: 85/85 MS: 1 CopyPart- 00:06:57.534 [2024-07-23 10:27:45.893191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.534 [2024-07-23 10:27:45.893217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.893266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.534 [2024-07-23 10:27:45.893283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.893332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.534 [2024-07-23 10:27:45.893349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.893402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.534 [2024-07-23 10:27:45.893416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.534 #36 NEW cov: 12125 ft: 13708 corp: 10/666b lim: 90 exec/s: 0 rss: 72Mb L: 73/85 MS: 1 CrossOver- 00:06:57.534 [2024-07-23 10:27:45.943245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.534 [2024-07-23 10:27:45.943274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.943311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.534 [2024-07-23 10:27:45.943328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.943379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.534 [2024-07-23 10:27:45.943394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.534 #37 NEW cov: 12125 ft: 13771 corp: 11/737b 
lim: 90 exec/s: 0 rss: 72Mb L: 71/85 MS: 1 ChangeBinInt- 00:06:57.534 [2024-07-23 10:27:45.983513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.534 [2024-07-23 10:27:45.983540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.983586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.534 [2024-07-23 10:27:45.983602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.983657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.534 [2024-07-23 10:27:45.983674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:45.983728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.534 [2024-07-23 10:27:45.983744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.534 #38 NEW cov: 12125 ft: 13793 corp: 12/812b lim: 90 exec/s: 0 rss: 72Mb L: 75/85 MS: 1 InsertRepeatedBytes- 00:06:57.534 [2024-07-23 10:27:46.023603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.534 [2024-07-23 10:27:46.023630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:46.023679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.534 [2024-07-23 10:27:46.023695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:46.023746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.534 [2024-07-23 10:27:46.023759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.534 [2024-07-23 10:27:46.023815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.534 [2024-07-23 10:27:46.023831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.794 #39 NEW cov: 12125 ft: 13807 corp: 13/890b lim: 90 exec/s: 0 rss: 72Mb L: 78/85 MS: 1 ChangeBinInt- 00:06:57.794 [2024-07-23 10:27:46.073603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.794 [2024-07-23 10:27:46.073630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.073665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.794 [2024-07-23 10:27:46.073681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.073734] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.794 [2024-07-23 10:27:46.073749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.794 #40 NEW cov: 12125 ft: 13867 corp: 14/959b lim: 90 exec/s: 0 rss: 72Mb L: 69/85 MS: 1 CrossOver- 00:06:57.794 [2024-07-23 10:27:46.123845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.794 [2024-07-23 10:27:46.123871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.123918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.794 [2024-07-23 10:27:46.123935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.124003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.794 [2024-07-23 10:27:46.124020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.124072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.794 [2024-07-23 10:27:46.124089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.794 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:57.794 #41 NEW cov: 12148 ft: 13920 corp: 15/1037b lim: 90 exec/s: 0 rss: 73Mb L: 78/85 MS: 1 ChangeBinInt- 00:06:57.794 [2024-07-23 10:27:46.173982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.794 [2024-07-23 10:27:46.174009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.174073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.794 [2024-07-23 10:27:46.174090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.174142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.794 [2024-07-23 10:27:46.174157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.174211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:57.794 [2024-07-23 10:27:46.174228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.794 #42 NEW cov: 12148 ft: 13934 corp: 16/1113b lim: 90 exec/s: 0 rss: 73Mb L: 76/85 MS: 1 CrossOver- 00:06:57.794 [2024-07-23 10:27:46.223864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.794 [2024-07-23 10:27:46.223890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.223925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.794 [2024-07-23 10:27:46.223942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.794 #48 NEW cov: 12148 ft: 14255 corp: 17/1156b lim: 90 exec/s: 48 rss: 73Mb L: 43/85 MS: 1 CrossOver- 00:06:57.794 [2024-07-23 10:27:46.274132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:57.794 [2024-07-23 10:27:46.274161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.274224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:57.794 [2024-07-23 10:27:46.274240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.794 [2024-07-23 10:27:46.274293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:57.794 [2024-07-23 10:27:46.274310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.054 #49 NEW cov: 12148 ft: 14273 corp: 18/1225b lim: 90 exec/s: 49 rss: 73Mb L: 69/85 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:58.054 [2024-07-23 10:27:46.324432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.054 [2024-07-23 10:27:46.324459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.324496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.054 [2024-07-23 10:27:46.324511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.324562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.054 [2024-07-23 10:27:46.324579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.324629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.054 [2024-07-23 10:27:46.324645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.054 #50 NEW cov: 12148 ft: 14383 corp: 19/1303b lim: 90 exec/s: 50 rss: 73Mb L: 78/85 MS: 1 CopyPart- 00:06:58.054 [2024-07-23 10:27:46.364377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.054 [2024-07-23 10:27:46.364405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.364443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.054 [2024-07-23 10:27:46.364458] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.364512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.054 [2024-07-23 10:27:46.364529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.054 #51 NEW cov: 12148 ft: 14445 corp: 20/1372b lim: 90 exec/s: 51 rss: 73Mb L: 69/85 MS: 1 ChangeByte- 00:06:58.054 [2024-07-23 10:27:46.404632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.054 [2024-07-23 10:27:46.404660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.404705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.054 [2024-07-23 10:27:46.404722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.404775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.054 [2024-07-23 10:27:46.404798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.404847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.054 [2024-07-23 10:27:46.404862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.054 #52 NEW cov: 12148 ft: 14447 corp: 21/1452b lim: 90 exec/s: 52 rss: 73Mb L: 80/85 MS: 1 CopyPart- 00:06:58.054 [2024-07-23 10:27:46.454758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.054 [2024-07-23 10:27:46.454789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.454842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.054 [2024-07-23 10:27:46.454859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.454911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.054 [2024-07-23 10:27:46.454927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.454980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.054 [2024-07-23 10:27:46.454996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.054 #53 NEW cov: 12148 ft: 14496 corp: 22/1533b lim: 90 exec/s: 53 rss: 73Mb L: 81/85 MS: 1 InsertRepeatedBytes- 00:06:58.054 [2024-07-23 10:27:46.504765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.054 [2024-07-23 10:27:46.504797] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.504837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.054 [2024-07-23 10:27:46.504853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.054 [2024-07-23 10:27:46.504908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.054 [2024-07-23 10:27:46.504924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.054 #54 NEW cov: 12148 ft: 14514 corp: 23/1602b lim: 90 exec/s: 54 rss: 73Mb L: 69/85 MS: 1 ChangeByte- 00:06:58.314 [2024-07-23 10:27:46.555087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.314 [2024-07-23 10:27:46.555114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.555154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.314 [2024-07-23 10:27:46.555170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.555224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.314 [2024-07-23 10:27:46.555241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.555293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.314 [2024-07-23 10:27:46.555311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.314 #55 NEW cov: 12148 ft: 14526 corp: 24/1674b lim: 90 exec/s: 55 rss: 73Mb L: 72/85 MS: 1 InsertByte- 00:06:58.314 [2024-07-23 10:27:46.595021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.314 [2024-07-23 10:27:46.595046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.595092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.314 [2024-07-23 10:27:46.595112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.595164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.314 [2024-07-23 10:27:46.595179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.314 #56 NEW cov: 12148 ft: 14530 corp: 25/1743b lim: 90 exec/s: 56 rss: 73Mb L: 69/85 MS: 1 ChangeBinInt- 00:06:58.314 [2024-07-23 10:27:46.635292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.314 [2024-07-23 10:27:46.635320] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.635365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.314 [2024-07-23 10:27:46.635382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.635434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.314 [2024-07-23 10:27:46.635450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.635503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.314 [2024-07-23 10:27:46.635519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.314 #57 NEW cov: 12148 ft: 14548 corp: 26/1828b lim: 90 exec/s: 57 rss: 73Mb L: 85/85 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:58.314 [2024-07-23 10:27:46.685105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.314 [2024-07-23 10:27:46.685133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.685184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.314 [2024-07-23 10:27:46.685200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.314 #58 NEW cov: 12148 ft: 14566 corp: 27/1871b lim: 90 exec/s: 58 rss: 73Mb L: 43/85 MS: 1 ShuffleBytes- 00:06:58.314 [2024-07-23 10:27:46.735561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.314 [2024-07-23 10:27:46.735588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.735636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.314 [2024-07-23 10:27:46.735653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.735706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.314 [2024-07-23 10:27:46.735722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.735775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.314 [2024-07-23 10:27:46.735796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.314 #59 NEW cov: 12148 ft: 14576 corp: 28/1946b lim: 90 exec/s: 59 rss: 73Mb L: 75/85 MS: 1 CrossOver- 00:06:58.314 [2024-07-23 10:27:46.785523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.314 
[2024-07-23 10:27:46.785550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.785605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.314 [2024-07-23 10:27:46.785620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.314 [2024-07-23 10:27:46.785674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.314 [2024-07-23 10:27:46.785690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.314 #60 NEW cov: 12148 ft: 14582 corp: 29/2017b lim: 90 exec/s: 60 rss: 73Mb L: 71/85 MS: 1 ChangeBit- 00:06:58.574 [2024-07-23 10:27:46.825803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.574 [2024-07-23 10:27:46.825832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.825880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.574 [2024-07-23 10:27:46.825898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.825950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.574 [2024-07-23 10:27:46.825965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.826019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.574 [2024-07-23 10:27:46.826034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.574 #61 NEW cov: 12148 ft: 14599 corp: 30/2098b lim: 90 exec/s: 61 rss: 74Mb L: 81/85 MS: 1 ShuffleBytes- 00:06:58.574 [2024-07-23 10:27:46.865876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.574 [2024-07-23 10:27:46.865904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.865967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.574 [2024-07-23 10:27:46.865983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.866037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.574 [2024-07-23 10:27:46.866054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.866109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.574 [2024-07-23 10:27:46.866126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.574 #62 NEW cov: 12148 ft: 14622 corp: 31/2187b lim: 90 exec/s: 62 rss: 74Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:06:58.574 [2024-07-23 10:27:46.906066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.574 [2024-07-23 10:27:46.906093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.906135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.574 [2024-07-23 10:27:46.906151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.906202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.574 [2024-07-23 10:27:46.906221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.906275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.574 [2024-07-23 10:27:46.906292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.574 #63 NEW cov: 12148 ft: 14623 corp: 32/2269b lim: 90 exec/s: 63 rss: 74Mb L: 82/89 MS: 1 InsertByte- 00:06:58.574 [2024-07-23 10:27:46.946101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.574 [2024-07-23 10:27:46.946128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.946175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.574 [2024-07-23 10:27:46.946192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.946260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.574 [2024-07-23 10:27:46.946278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.946333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.574 [2024-07-23 10:27:46.946349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.574 #64 NEW cov: 12148 ft: 14632 corp: 33/2348b lim: 90 exec/s: 64 rss: 74Mb L: 79/89 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:58.574 [2024-07-23 10:27:46.996269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.574 [2024-07-23 10:27:46.996297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.996343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.574 [2024-07-23 10:27:46.996360] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.996413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.574 [2024-07-23 10:27:46.996428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.574 [2024-07-23 10:27:46.996481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.574 [2024-07-23 10:27:46.996497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.575 #65 NEW cov: 12148 ft: 14636 corp: 34/2420b lim: 90 exec/s: 65 rss: 74Mb L: 72/89 MS: 1 ChangeByte- 00:06:58.575 [2024-07-23 10:27:47.046400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.575 [2024-07-23 10:27:47.046428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.575 [2024-07-23 10:27:47.046474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.575 [2024-07-23 10:27:47.046491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.575 [2024-07-23 10:27:47.046543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.575 [2024-07-23 10:27:47.046559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.575 [2024-07-23 10:27:47.046613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.575 [2024-07-23 10:27:47.046633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.575 #66 NEW cov: 12148 ft: 14644 corp: 35/2499b lim: 90 exec/s: 66 rss: 74Mb L: 79/89 MS: 1 CopyPart- 00:06:58.835 [2024-07-23 10:27:47.086527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.835 [2024-07-23 10:27:47.086555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.086600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.835 [2024-07-23 10:27:47.086616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.086667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.835 [2024-07-23 10:27:47.086680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.086732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.835 [2024-07-23 10:27:47.086748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.835 #67 NEW cov: 12148 
ft: 14666 corp: 36/2588b lim: 90 exec/s: 67 rss: 74Mb L: 89/89 MS: 1 ShuffleBytes- 00:06:58.835 [2024-07-23 10:27:47.136368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.835 [2024-07-23 10:27:47.136396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.136436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.835 [2024-07-23 10:27:47.136452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.835 #68 NEW cov: 12148 ft: 14684 corp: 37/2631b lim: 90 exec/s: 68 rss: 74Mb L: 43/89 MS: 1 CrossOver- 00:06:58.835 [2024-07-23 10:27:47.186646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.835 [2024-07-23 10:27:47.186673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.186713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.835 [2024-07-23 10:27:47.186728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.186784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.835 [2024-07-23 10:27:47.186800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.835 #69 NEW cov: 12148 ft: 14705 corp: 38/2700b lim: 90 exec/s: 69 rss: 74Mb L: 69/89 MS: 1 ChangeByte- 00:06:58.835 [2024-07-23 10:27:47.236926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:58.835 [2024-07-23 10:27:47.236953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.237004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:58.835 [2024-07-23 10:27:47.237018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.237085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:58.835 [2024-07-23 10:27:47.237100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.835 [2024-07-23 10:27:47.237154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:58.835 [2024-07-23 10:27:47.237168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.835 #70 NEW cov: 12148 ft: 14710 corp: 39/2783b lim: 90 exec/s: 35 rss: 74Mb L: 83/89 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:58.835 #70 DONE cov: 12148 ft: 14710 corp: 39/2783b lim: 90 exec/s: 35 rss: 74Mb 00:06:58.835 ###### Recommended dictionary. 
######
00:06:58.835 "\001\000\000\000" # Uses: 7
00:06:58.835 ###### End of recommended dictionary. ######
00:06:58.835 Done 70 runs in 2 second(s)
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421'
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:59.095 10:27:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21
[2024-07-23 10:27:47.443475] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:59.355 [2024-07-23 10:27:47.443551] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433342 ]
00:06:59.355 EAL: No free 2048 kB hugepages reported on node 1
00:06:59.355 [2024-07-23 10:27:47.745321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.355 [2024-07-23 10:27:47.778737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.355 [2024-07-23 10:27:47.831352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:59.355 [2024-07-23 10:27:47.847675] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:06:59.613 INFO: Running with entropic power schedule (0xFF, 100).
00:06:59.613 INFO: Seed: 2796824351
00:06:59.613 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b),
00:06:59.613 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890),
00:06:59.613 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:06:59.613 INFO: A corpus is not provided, starting from an empty corpus
00:06:59.613 #2 INITED exec/s: 0 rss: 64Mb
00:06:59.613 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:59.613 This may also happen if the target rejected all inputs we tried so far
00:06:59.613 [2024-07-23 10:27:47.902786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:06:59.613 [2024-07-23 10:27:47.902820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:59.872 NEW_FUNC[1/693]: 0x4b8e80 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623
00:06:59.872 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:59.873 #4 NEW cov: 11879 ft: 11880 corp: 2/15b lim: 50 exec/s: 0 rss: 70Mb L: 14/14 MS: 2 CopyPart-InsertRepeatedBytes-
00:06:59.873 [2024-07-23 10:27:48.224211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:06:59.873 [2024-07-23 10:27:48.224278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:59.873 [2024-07-23 10:27:48.224363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:06:59.873 [2024-07-23 10:27:48.224393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:59.873 [2024-07-23 10:27:48.224484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:06:59.873 [2024-07-23 10:27:48.224512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:59.873 [2024-07-23 10:27:48.224593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:06:59.873 [2024-07-23 10:27:48.224623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0
sqhd:0005 p:0 m:0 dnr:1 00:06:59.873 #6 NEW cov: 12009 ft: 13335 corp: 3/57b lim: 50 exec/s: 0 rss: 70Mb L: 42/42 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:59.873 [2024-07-23 10:27:48.274048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.873 [2024-07-23 10:27:48.274080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.873 [2024-07-23 10:27:48.274118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.873 [2024-07-23 10:27:48.274134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.873 [2024-07-23 10:27:48.274187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:59.873 [2024-07-23 10:27:48.274204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.873 [2024-07-23 10:27:48.274259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:59.873 [2024-07-23 10:27:48.274276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.873 #7 NEW cov: 12015 ft: 13606 corp: 4/99b lim: 50 exec/s: 0 rss: 71Mb L: 42/42 MS: 1 ShuffleBytes- 00:06:59.873 [2024-07-23 10:27:48.324025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.873 [2024-07-23 10:27:48.324056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.873 [2024-07-23 10:27:48.324093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.873 [2024-07-23 10:27:48.324111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.873 [2024-07-23 10:27:48.324170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:59.873 [2024-07-23 10:27:48.324189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.873 #8 NEW cov: 12100 ft: 14083 corp: 5/132b lim: 50 exec/s: 0 rss: 71Mb L: 33/42 MS: 1 CrossOver- 00:06:59.873 [2024-07-23 10:27:48.364154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:59.873 [2024-07-23 10:27:48.364184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.873 [2024-07-23 10:27:48.364237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:59.873 [2024-07-23 10:27:48.364253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.873 [2024-07-23 10:27:48.364312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:59.873 [2024-07-23 10:27:48.364331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:00.132 #9 NEW cov: 12100 ft: 14119 corp: 6/163b lim: 50 exec/s: 0 rss: 71Mb L: 31/42 MS: 1 InsertRepeatedBytes- 00:07:00.132 [2024-07-23 10:27:48.414447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.132 [2024-07-23 10:27:48.414476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.132 [2024-07-23 10:27:48.414521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.132 [2024-07-23 10:27:48.414537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.414593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.133 [2024-07-23 10:27:48.414610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.414665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.133 [2024-07-23 10:27:48.414682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.133 #10 NEW cov: 12100 ft: 14261 corp: 7/206b lim: 50 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 InsertByte- 00:07:00.133 [2024-07-23 10:27:48.464608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.133 [2024-07-23 10:27:48.464637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.464683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.133 [2024-07-23 10:27:48.464701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.464756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.133 [2024-07-23 10:27:48.464773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.464834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.133 [2024-07-23 10:27:48.464849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.133 #11 NEW cov: 12100 ft: 14425 corp: 8/248b lim: 50 exec/s: 0 rss: 72Mb L: 42/43 MS: 1 ChangeBinInt- 00:07:00.133 [2024-07-23 10:27:48.504251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.133 [2024-07-23 10:27:48.504284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.133 #22 NEW cov: 12100 ft: 14485 corp: 9/263b lim: 50 exec/s: 0 rss: 72Mb L: 15/43 MS: 1 InsertByte- 00:07:00.133 [2024-07-23 10:27:48.544754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.133 [2024-07-23 10:27:48.544788] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.544834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.133 [2024-07-23 10:27:48.544850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.544904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.133 [2024-07-23 10:27:48.544920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.133 [2024-07-23 10:27:48.544976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.133 [2024-07-23 10:27:48.544994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.133 #23 NEW cov: 12100 ft: 14565 corp: 10/305b lim: 50 exec/s: 0 rss: 72Mb L: 42/43 MS: 1 CMP- DE: "\017\011\254\260{\232\030\000"- 00:07:00.133 [2024-07-23 10:27:48.584477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.133 [2024-07-23 10:27:48.584505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.133 #24 NEW cov: 12100 ft: 14611 corp: 11/320b lim: 50 exec/s: 0 rss: 72Mb L: 15/43 MS: 1 ChangeBinInt- 00:07:00.392 [2024-07-23 10:27:48.634932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.392 [2024-07-23 10:27:48.634959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.635012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.393 [2024-07-23 10:27:48.635028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.635086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.393 [2024-07-23 10:27:48.635104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.393 #25 NEW cov: 12100 ft: 14638 corp: 12/356b lim: 50 exec/s: 0 rss: 72Mb L: 36/43 MS: 1 CopyPart- 00:07:00.393 [2024-07-23 10:27:48.685158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.393 [2024-07-23 10:27:48.685186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.685233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.393 [2024-07-23 10:27:48.685250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.685305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.393 
[2024-07-23 10:27:48.685321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.685379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.393 [2024-07-23 10:27:48.685396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.393 #26 NEW cov: 12100 ft: 14754 corp: 13/398b lim: 50 exec/s: 0 rss: 72Mb L: 42/43 MS: 1 ChangeBinInt- 00:07:00.393 [2024-07-23 10:27:48.724863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.393 [2024-07-23 10:27:48.724890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.393 #27 NEW cov: 12100 ft: 14797 corp: 14/413b lim: 50 exec/s: 0 rss: 72Mb L: 15/43 MS: 1 ChangeBinInt- 00:07:00.393 [2024-07-23 10:27:48.765234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.393 [2024-07-23 10:27:48.765263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.765316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.393 [2024-07-23 10:27:48.765333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.765389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.393 [2024-07-23 10:27:48.765407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.393 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:00.393 #28 NEW cov: 12123 ft: 14864 corp: 15/447b lim: 50 exec/s: 0 rss: 72Mb L: 34/43 MS: 1 InsertRepeatedBytes- 00:07:00.393 [2024-07-23 10:27:48.815418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.393 [2024-07-23 10:27:48.815445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.815484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.393 [2024-07-23 10:27:48.815501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.815556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.393 [2024-07-23 10:27:48.815571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.393 #29 NEW cov: 12123 ft: 14877 corp: 16/480b lim: 50 exec/s: 0 rss: 72Mb L: 33/43 MS: 1 ChangeByte- 00:07:00.393 [2024-07-23 10:27:48.855683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.393 [2024-07-23 10:27:48.855711] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.855759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.393 [2024-07-23 10:27:48.855775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.855835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.393 [2024-07-23 10:27:48.855851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.393 [2024-07-23 10:27:48.855906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.393 [2024-07-23 10:27:48.855922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.393 #34 NEW cov: 12123 ft: 14938 corp: 17/526b lim: 50 exec/s: 0 rss: 72Mb L: 46/46 MS: 5 ChangeByte-CopyPart-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:00.653 [2024-07-23 10:27:48.895819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.653 [2024-07-23 10:27:48.895850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:48.895889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.653 [2024-07-23 10:27:48.895907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:48.895962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.653 [2024-07-23 10:27:48.895979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:48.896036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.653 [2024-07-23 10:27:48.896052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.653 #40 NEW cov: 12123 ft: 14950 corp: 18/571b lim: 50 exec/s: 40 rss: 72Mb L: 45/46 MS: 1 InsertRepeatedBytes- 00:07:00.653 [2024-07-23 10:27:48.935764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.653 [2024-07-23 10:27:48.935797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:48.935833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.653 [2024-07-23 10:27:48.935847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:48.935904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.653 [2024-07-23 10:27:48.935920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.653 #41 NEW cov: 12123 ft: 14959 corp: 19/603b lim: 50 exec/s: 41 rss: 72Mb L: 32/46 MS: 1 EraseBytes- 00:07:00.653 [2024-07-23 10:27:48.985617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.653 [2024-07-23 10:27:48.985645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.653 #42 NEW cov: 12123 ft: 14971 corp: 20/622b lim: 50 exec/s: 42 rss: 72Mb L: 19/46 MS: 1 CopyPart- 00:07:00.653 [2024-07-23 10:27:49.036178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.653 [2024-07-23 10:27:49.036206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.036254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.653 [2024-07-23 10:27:49.036272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.036337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.653 [2024-07-23 10:27:49.036354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.036410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.653 [2024-07-23 10:27:49.036427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.653 #43 NEW cov: 12123 ft: 14976 corp: 21/664b lim: 50 exec/s: 43 rss: 72Mb L: 42/46 MS: 1 ChangeBinInt- 00:07:00.653 [2024-07-23 10:27:49.086312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.653 [2024-07-23 10:27:49.086340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.086383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.653 [2024-07-23 10:27:49.086401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.086454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.653 [2024-07-23 10:27:49.086470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.086526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.653 [2024-07-23 10:27:49.086543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.653 #44 NEW cov: 12123 ft: 15003 corp: 22/706b lim: 50 exec/s: 44 rss: 73Mb L: 42/46 MS: 1 ChangeBit- 00:07:00.653 [2024-07-23 10:27:49.136469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.653 [2024-07-23 10:27:49.136498] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.136542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.653 [2024-07-23 10:27:49.136557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.136613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.653 [2024-07-23 10:27:49.136629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.653 [2024-07-23 10:27:49.136685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.653 [2024-07-23 10:27:49.136701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.913 #50 NEW cov: 12123 ft: 15016 corp: 23/751b lim: 50 exec/s: 50 rss: 73Mb L: 45/46 MS: 1 CrossOver- 00:07:00.913 [2024-07-23 10:27:49.186165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.913 [2024-07-23 10:27:49.186194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.913 #51 NEW cov: 12123 ft: 15028 corp: 24/766b lim: 50 exec/s: 51 rss: 73Mb L: 15/46 MS: 1 ChangeBit- 00:07:00.913 [2024-07-23 10:27:49.226709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.913 [2024-07-23 10:27:49.226737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.913 [2024-07-23 10:27:49.226790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.913 [2024-07-23 10:27:49.226806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.913 [2024-07-23 10:27:49.226859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.913 [2024-07-23 10:27:49.226873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.913 [2024-07-23 10:27:49.226930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:00.913 [2024-07-23 10:27:49.226945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.913 #52 NEW cov: 12123 ft: 15071 corp: 25/808b lim: 50 exec/s: 52 rss: 73Mb L: 42/46 MS: 1 CopyPart- 00:07:00.913 [2024-07-23 10:27:49.276400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.913 [2024-07-23 10:27:49.276430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.913 #53 NEW cov: 12123 ft: 15082 corp: 26/823b lim: 50 exec/s: 53 rss: 73Mb L: 15/46 MS: 1 ChangeByte- 00:07:00.913 [2024-07-23 10:27:49.316522] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.913 [2024-07-23 10:27:49.316550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.913 #54 NEW cov: 12123 ft: 15086 corp: 27/838b lim: 50 exec/s: 54 rss: 73Mb L: 15/46 MS: 1 ChangeByte- 00:07:00.913 [2024-07-23 10:27:49.366960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.913 [2024-07-23 10:27:49.366989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.913 [2024-07-23 10:27:49.367026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.913 [2024-07-23 10:27:49.367042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.913 [2024-07-23 10:27:49.367099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.913 [2024-07-23 10:27:49.367115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.913 #55 NEW cov: 12123 ft: 15132 corp: 28/869b lim: 50 exec/s: 55 rss: 73Mb L: 31/46 MS: 1 ChangeBit- 00:07:00.913 [2024-07-23 10:27:49.407050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:00.913 [2024-07-23 10:27:49.407078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.913 [2024-07-23 10:27:49.407123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:00.913 [2024-07-23 10:27:49.407140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.913 [2024-07-23 10:27:49.407197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:00.913 [2024-07-23 10:27:49.407213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.172 #56 NEW cov: 12123 ft: 15133 corp: 29/901b lim: 50 exec/s: 56 rss: 73Mb L: 32/46 MS: 1 CrossOver- 00:07:01.172 [2024-07-23 10:27:49.457343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.172 [2024-07-23 10:27:49.457371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.172 [2024-07-23 10:27:49.457434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:01.172 [2024-07-23 10:27:49.457450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.172 [2024-07-23 10:27:49.457505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:01.172 [2024-07-23 10:27:49.457521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.172 [2024-07-23 10:27:49.457579] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:01.172 [2024-07-23 10:27:49.457597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.172 #57 NEW cov: 12123 ft: 15139 corp: 30/943b lim: 50 exec/s: 57 rss: 73Mb L: 42/46 MS: 1 PersAutoDict- DE: "\017\011\254\260{\232\030\000"- 00:07:01.172 [2024-07-23 10:27:49.497511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.172 [2024-07-23 10:27:49.497541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.172 [2024-07-23 10:27:49.497579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:01.172 [2024-07-23 10:27:49.497595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.172 [2024-07-23 10:27:49.497648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:01.172 [2024-07-23 10:27:49.497663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.172 [2024-07-23 10:27:49.497717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:01.173 [2024-07-23 10:27:49.497733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.173 [2024-07-23 10:27:49.547635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.173 [2024-07-23 10:27:49.547664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.173 [2024-07-23 10:27:49.547710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:01.173 [2024-07-23 10:27:49.547726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.173 [2024-07-23 10:27:49.547791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:01.173 [2024-07-23 10:27:49.547809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.173 [2024-07-23 10:27:49.547861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:01.173 [2024-07-23 10:27:49.547878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.173 #59 NEW cov: 12123 ft: 15187 corp: 31/984b lim: 50 exec/s: 59 rss: 73Mb L: 41/46 MS: 2 InsertRepeatedBytes-InsertByte- 00:07:01.173 [2024-07-23 10:27:49.587298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.173 [2024-07-23 10:27:49.587328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.173 #60 NEW cov: 12123 ft: 15242 corp: 32/999b lim: 50 exec/s: 60 rss: 73Mb L: 15/46 MS: 1 ChangeBinInt- 
00:07:01.173 [2024-07-23 10:27:49.627861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.173 [2024-07-23 10:27:49.627890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.173 [2024-07-23 10:27:49.627929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:01.173 [2024-07-23 10:27:49.627943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.173 [2024-07-23 10:27:49.627999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:01.173 [2024-07-23 10:27:49.628016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.173 [2024-07-23 10:27:49.628071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:01.173 [2024-07-23 10:27:49.628087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.173 #61 NEW cov: 12123 ft: 15243 corp: 33/1044b lim: 50 exec/s: 61 rss: 73Mb L: 45/46 MS: 1 ChangeBinInt- 00:07:01.432 [2024-07-23 10:27:49.677718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.432 [2024-07-23 10:27:49.677747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.432 [2024-07-23 10:27:49.677824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:01.432 [2024-07-23 10:27:49.677841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.432 #62 NEW cov: 12123 ft: 15490 corp: 34/1073b lim: 50 exec/s: 62 rss: 73Mb L: 29/46 MS: 1 InsertRepeatedBytes- 00:07:01.432 [2024-07-23 10:27:49.727686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.432 [2024-07-23 10:27:49.727715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.432 #63 NEW cov: 12123 ft: 15500 corp: 35/1088b lim: 50 exec/s: 63 rss: 73Mb L: 15/46 MS: 1 ChangeBit- 00:07:01.433 [2024-07-23 10:27:49.767781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.433 [2024-07-23 10:27:49.767811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.433 #69 NEW cov: 12123 ft: 15536 corp: 36/1103b lim: 50 exec/s: 69 rss: 73Mb L: 15/46 MS: 1 ChangeBinInt- 00:07:01.433 [2024-07-23 10:27:49.817917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.433 [2024-07-23 10:27:49.817946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.433 #70 NEW cov: 12123 ft: 15548 corp: 37/1118b lim: 50 exec/s: 70 rss: 73Mb L: 15/46 MS: 1 ShuffleBytes- 00:07:01.433 [2024-07-23 10:27:49.868091] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:01.433 [2024-07-23 10:27:49.868119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.433 #71 NEW cov: 12123 ft: 15571 corp: 38/1137b lim: 50 exec/s: 35 rss: 74Mb L: 19/46 MS: 1 ChangeBit- 00:07:01.433 #71 DONE cov: 12123 ft: 15571 corp: 38/1137b lim: 50 exec/s: 35 rss: 74Mb 00:07:01.433 ###### Recommended dictionary. ###### 00:07:01.433 "\017\011\254\260{\232\030\000" # Uses: 2 00:07:01.433 ###### End of recommended dictionary. ###### 00:07:01.433 Done 71 runs in 2 second(s) 00:07:01.692 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.692 10:27:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.692 10:27:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.692 10:27:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:01.692 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:01.692 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.692 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.693 10:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:01.693 [2024-07-23 10:27:50.074205] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:01.693 [2024-07-23 10:27:50.074285] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433708 ] 00:07:01.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.953 [2024-07-23 10:27:50.377498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.953 [2024-07-23 10:27:50.408541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.214 [2024-07-23 10:27:50.461486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.214 [2024-07-23 10:27:50.477828] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:02.214 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.214 INFO: Seed: 1131852574 00:07:02.214 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:02.214 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:02.214 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:02.214 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.214 #2 INITED exec/s: 0 rss: 64Mb 00:07:02.214 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:02.214 This may also happen if the target rejected all inputs we tried so far 00:07:02.214 [2024-07-23 10:27:50.555415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.214 [2024-07-23 10:27:50.555466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.214 [2024-07-23 10:27:50.555611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.214 [2024-07-23 10:27:50.555642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.214 [2024-07-23 10:27:50.555783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.214 [2024-07-23 10:27:50.555807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.474 NEW_FUNC[1/693]: 0x4bb140 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:02.474 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.474 #7 NEW cov: 11905 ft: 11906 corp: 2/68b lim: 85 exec/s: 0 rss: 70Mb L: 67/67 MS: 5 CrossOver-CrossOver-CopyPart-CopyPart-InsertRepeatedBytes- 00:07:02.474 [2024-07-23 10:27:50.895571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.474 [2024-07-23 10:27:50.895612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.474 [2024-07-23 10:27:50.895691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.474 [2024-07-23 10:27:50.895710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.474 [2024-07-23 10:27:50.895818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.474 [2024-07-23 10:27:50.895841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.474 #8 NEW cov: 12035 ft: 12513 corp: 3/135b lim: 85 exec/s: 0 rss: 70Mb L: 67/67 MS: 1 ChangeByte- 00:07:02.474 [2024-07-23 10:27:50.965486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.474 [2024-07-23 10:27:50.965519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.474 [2024-07-23 10:27:50.965611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.474 [2024-07-23 10:27:50.965630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.733 #14 NEW cov: 12041 ft: 13216 corp: 4/172b lim: 85 exec/s: 0 rss: 70Mb L: 37/67 MS: 1 EraseBytes- 00:07:02.733 [2024-07-23 10:27:51.016341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.733 [2024-07-23 10:27:51.016371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.016449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.733 [2024-07-23 10:27:51.016469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.016515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.733 [2024-07-23 10:27:51.016533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.016624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:02.733 [2024-07-23 10:27:51.016644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.733 #15 NEW cov: 12126 ft: 13866 corp: 5/246b lim: 85 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:07:02.733 [2024-07-23 10:27:51.086621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.733 [2024-07-23 10:27:51.086652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.086727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.733 [2024-07-23 10:27:51.086746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.086812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.733 [2024-07-23 10:27:51.086833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.086928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:02.733 [2024-07-23 10:27:51.086949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.733 #21 NEW cov: 12126 ft: 13932 corp: 6/327b lim: 85 exec/s: 0 rss: 71Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:07:02.733 [2024-07-23 10:27:51.156412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.733 [2024-07-23 10:27:51.156441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.156508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.733 [2024-07-23 10:27:51.156526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.156605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.733 [2024-07-23 10:27:51.156627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.733 #22 NEW cov: 12126 ft: 14036 corp: 7/388b lim: 85 exec/s: 0 rss: 71Mb L: 61/81 MS: 1 EraseBytes- 00:07:02.733 [2024-07-23 10:27:51.226941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.733 [2024-07-23 10:27:51.226971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.227091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.733 [2024-07-23 10:27:51.227108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.227201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.733 [2024-07-23 10:27:51.227223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.733 [2024-07-23 10:27:51.227309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:02.733 [2024-07-23 10:27:51.227328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.993 #23 NEW cov: 12126 ft: 14194 corp: 8/462b lim: 85 exec/s: 0 rss: 71Mb L: 74/81 MS: 1 EraseBytes- 00:07:02.993 [2024-07-23 10:27:51.296908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.993 [2024-07-23 10:27:51.296939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.297002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.993 [2024-07-23 10:27:51.297021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.297080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.993 [2024-07-23 10:27:51.297100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.993 #24 NEW cov: 12126 ft: 14279 corp: 9/523b lim: 85 exec/s: 0 rss: 72Mb L: 61/81 MS: 1 ChangeBit- 00:07:02.993 [2024-07-23 10:27:51.357020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.993 [2024-07-23 10:27:51.357050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.357102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.993 [2024-07-23 10:27:51.357118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.357182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.993 [2024-07-23 10:27:51.357200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.993 #30 NEW cov: 12126 ft: 14302 corp: 10/584b lim: 85 exec/s: 0 rss: 72Mb L: 61/81 MS: 1 ShuffleBytes- 00:07:02.993 [2024-07-23 10:27:51.407148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.993 [2024-07-23 10:27:51.407179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.407257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.993 [2024-07-23 10:27:51.407274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.407338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:02.993 [2024-07-23 10:27:51.407361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.993 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:02.993 #31 NEW cov: 12149 ft: 14373 corp: 11/645b lim: 85 exec/s: 0 rss: 72Mb L: 61/81 MS: 1 ChangeBinInt- 00:07:02.993 [2024-07-23 10:27:51.457424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:02.993 [2024-07-23 10:27:51.457451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.457518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:02.993 [2024-07-23 10:27:51.457534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.993 [2024-07-23 10:27:51.457596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:2 nsid:0 00:07:02.993 [2024-07-23 10:27:51.457616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.993 #32 NEW cov: 12149 ft: 14383 corp: 12/710b lim: 85 exec/s: 0 rss: 72Mb L: 65/81 MS: 1 CrossOver- 00:07:03.253 [2024-07-23 10:27:51.517195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.253 [2024-07-23 10:27:51.517226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.517318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.253 [2024-07-23 10:27:51.517338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.253 #33 NEW cov: 12149 ft: 14423 corp: 13/748b lim: 85 exec/s: 33 rss: 72Mb L: 38/81 MS: 1 InsertByte- 00:07:03.253 [2024-07-23 10:27:51.567801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.253 [2024-07-23 10:27:51.567831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.567898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.253 [2024-07-23 10:27:51.567915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.567980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.253 [2024-07-23 10:27:51.567999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.253 #34 NEW cov: 12149 ft: 14468 corp: 14/813b lim: 85 exec/s: 34 rss: 72Mb L: 65/81 MS: 1 ChangeBinInt- 00:07:03.253 [2024-07-23 10:27:51.628222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.253 [2024-07-23 10:27:51.628252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.628326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.253 [2024-07-23 10:27:51.628344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.628403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.253 [2024-07-23 10:27:51.628426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.628524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:03.253 [2024-07-23 10:27:51.628545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.253 #35 NEW cov: 12149 ft: 14490 corp: 15/887b lim: 85 exec/s: 35 rss: 72Mb L: 74/81 MS: 1 CMP- DE: 
"\001\000\002\000"- 00:07:03.253 [2024-07-23 10:27:51.678472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.253 [2024-07-23 10:27:51.678501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.678572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.253 [2024-07-23 10:27:51.678591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.678668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.253 [2024-07-23 10:27:51.678685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.678781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:03.253 [2024-07-23 10:27:51.678801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.253 #36 NEW cov: 12149 ft: 14514 corp: 16/961b lim: 85 exec/s: 36 rss: 72Mb L: 74/81 MS: 1 CrossOver- 00:07:03.253 [2024-07-23 10:27:51.748410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.253 [2024-07-23 10:27:51.748440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.748502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.253 [2024-07-23 10:27:51.748518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.253 [2024-07-23 10:27:51.748598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.253 [2024-07-23 10:27:51.748619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.513 #37 NEW cov: 12149 ft: 14572 corp: 17/1028b lim: 85 exec/s: 37 rss: 72Mb L: 67/81 MS: 1 ChangeByte- 00:07:03.513 [2024-07-23 10:27:51.788298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.513 [2024-07-23 10:27:51.788326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.788388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.513 [2024-07-23 10:27:51.788408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.788454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.513 [2024-07-23 10:27:51.788473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.513 #38 NEW cov: 12149 ft: 14687 corp: 18/1095b lim: 85 exec/s: 38 rss: 72Mb L: 
67/81 MS: 1 ChangeByte- 00:07:03.513 [2024-07-23 10:27:51.849084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.513 [2024-07-23 10:27:51.849114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.849190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.513 [2024-07-23 10:27:51.849210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.849282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.513 [2024-07-23 10:27:51.849300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.849385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:03.513 [2024-07-23 10:27:51.849403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.513 #39 NEW cov: 12149 ft: 14703 corp: 19/1174b lim: 85 exec/s: 39 rss: 72Mb L: 79/81 MS: 1 InsertRepeatedBytes- 00:07:03.513 [2024-07-23 10:27:51.909167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.513 [2024-07-23 10:27:51.909194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.909259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.513 [2024-07-23 10:27:51.909277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.909357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.513 [2024-07-23 10:27:51.909374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.909458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:03.513 [2024-07-23 10:27:51.909480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.513 #40 NEW cov: 12149 ft: 14729 corp: 20/1248b lim: 85 exec/s: 40 rss: 72Mb L: 74/81 MS: 1 ChangeBinInt- 00:07:03.513 [2024-07-23 10:27:51.969442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.513 [2024-07-23 10:27:51.969469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.969544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.513 [2024-07-23 10:27:51.969567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.513 [2024-07-23 10:27:51.969668] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.514 [2024-07-23 10:27:51.969688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.514 [2024-07-23 10:27:51.969781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:03.514 [2024-07-23 10:27:51.969801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.514 #41 NEW cov: 12149 ft: 14741 corp: 21/1328b lim: 85 exec/s: 41 rss: 72Mb L: 80/81 MS: 1 InsertRepeatedBytes- 00:07:03.773 [2024-07-23 10:27:52.019300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.773 [2024-07-23 10:27:52.019327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.019405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.773 [2024-07-23 10:27:52.019426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.019489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.773 [2024-07-23 10:27:52.019507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.773 #42 NEW cov: 12149 ft: 14755 corp: 22/1386b lim: 85 exec/s: 42 rss: 72Mb L: 58/81 MS: 1 EraseBytes- 00:07:03.773 [2024-07-23 10:27:52.069428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.773 [2024-07-23 10:27:52.069457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.069519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.773 [2024-07-23 10:27:52.069538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.069594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.773 [2024-07-23 10:27:52.069613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.773 #43 NEW cov: 12149 ft: 14764 corp: 23/1443b lim: 85 exec/s: 43 rss: 72Mb L: 57/81 MS: 1 EraseBytes- 00:07:03.773 [2024-07-23 10:27:52.119871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.773 [2024-07-23 10:27:52.119899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.119965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.773 [2024-07-23 10:27:52.119982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.120047] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.773 [2024-07-23 10:27:52.120063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.120155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:03.773 [2024-07-23 10:27:52.120177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.773 #44 NEW cov: 12149 ft: 14788 corp: 24/1527b lim: 85 exec/s: 44 rss: 72Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:07:03.773 [2024-07-23 10:27:52.169782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.773 [2024-07-23 10:27:52.169809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.169883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.773 [2024-07-23 10:27:52.169899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.169953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.773 [2024-07-23 10:27:52.169972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.773 #45 NEW cov: 12149 ft: 14809 corp: 25/1585b lim: 85 exec/s: 45 rss: 73Mb L: 58/84 MS: 1 CrossOver- 00:07:03.773 [2024-07-23 10:27:52.230319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:03.773 [2024-07-23 10:27:52.230346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.230406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:03.773 [2024-07-23 10:27:52.230422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.230496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:03.773 [2024-07-23 10:27:52.230514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.773 [2024-07-23 10:27:52.230604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:03.773 [2024-07-23 10:27:52.230625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.773 #46 NEW cov: 12149 ft: 14846 corp: 26/1661b lim: 85 exec/s: 46 rss: 73Mb L: 76/84 MS: 1 CMP- DE: "\377\377"- 00:07:04.032 [2024-07-23 10:27:52.290542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:04.032 [2024-07-23 10:27:52.290573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.032 
[2024-07-23 10:27:52.290643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:04.032 [2024-07-23 10:27:52.290661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.032 [2024-07-23 10:27:52.290712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:04.032 [2024-07-23 10:27:52.290734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.032 [2024-07-23 10:27:52.290826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:04.032 [2024-07-23 10:27:52.290844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.032 #47 NEW cov: 12149 ft: 14868 corp: 27/1735b lim: 85 exec/s: 47 rss: 73Mb L: 74/84 MS: 1 ChangeBinInt- 00:07:04.032 [2024-07-23 10:27:52.340825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:04.032 [2024-07-23 10:27:52.340853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.032 [2024-07-23 10:27:52.340926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:04.032 [2024-07-23 10:27:52.340946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.032 [2024-07-23 10:27:52.341009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:04.032 [2024-07-23 10:27:52.341026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.032 [2024-07-23 10:27:52.341122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:04.032 [2024-07-23 10:27:52.341143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.033 #48 NEW cov: 12149 ft: 14878 corp: 28/1808b lim: 85 exec/s: 48 rss: 73Mb L: 73/84 MS: 1 CopyPart- 00:07:04.033 [2024-07-23 10:27:52.391014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:04.033 [2024-07-23 10:27:52.391048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.391126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:04.033 [2024-07-23 10:27:52.391147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.391200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:04.033 [2024-07-23 10:27:52.391223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.391313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:3 nsid:0 00:07:04.033 [2024-07-23 10:27:52.391333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.033 #49 NEW cov: 12149 ft: 14923 corp: 29/1886b lim: 85 exec/s: 49 rss: 73Mb L: 78/84 MS: 1 PersAutoDict- DE: "\001\000\002\000"- 00:07:04.033 [2024-07-23 10:27:52.461230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:04.033 [2024-07-23 10:27:52.461263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.461338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:04.033 [2024-07-23 10:27:52.461355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.461424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:04.033 [2024-07-23 10:27:52.461446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.461536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:04.033 [2024-07-23 10:27:52.461557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.033 #50 NEW cov: 12149 ft: 14924 corp: 30/1969b lim: 85 exec/s: 50 rss: 73Mb L: 83/84 MS: 1 InsertRepeatedBytes- 00:07:04.033 [2024-07-23 10:27:52.511423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:04.033 [2024-07-23 10:27:52.511453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.511516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:04.033 [2024-07-23 10:27:52.511533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.511616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:04.033 [2024-07-23 10:27:52.511634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.033 [2024-07-23 10:27:52.511732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:04.033 [2024-07-23 10:27:52.511750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.292 #51 NEW cov: 12149 ft: 14929 corp: 31/2047b lim: 85 exec/s: 25 rss: 73Mb L: 78/84 MS: 1 ChangeByte- 00:07:04.292 #51 DONE cov: 12149 ft: 14929 corp: 31/2047b lim: 85 exec/s: 25 rss: 73Mb 00:07:04.292 ###### Recommended dictionary. ###### 00:07:04.292 "\001\000\002\000" # Uses: 1 00:07:04.292 "\377\377" # Uses: 0 00:07:04.292 ###### End of recommended dictionary. 
###### 00:07:04.292 Done 51 runs in 2 second(s) 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.292 10:27:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:04.293 [2024-07-23 10:27:52.709417] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:04.293 [2024-07-23 10:27:52.709490] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434079 ] 00:07:04.293 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.552 [2024-07-23 10:27:52.977269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.552 [2024-07-23 10:27:53.006491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.811 [2024-07-23 10:27:53.058979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.811 [2024-07-23 10:27:53.075308] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:04.811 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:04.811 INFO: Seed: 3728842946 00:07:04.811 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:04.811 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:04.811 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:04.811 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.811 #2 INITED exec/s: 0 rss: 63Mb 00:07:04.811 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.811 This may also happen if the target rejected all inputs we tried so far 00:07:04.811 [2024-07-23 10:27:53.130375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:04.811 [2024-07-23 10:27:53.130407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.070 NEW_FUNC[1/692]: 0x4be370 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:05.070 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:05.070 #4 NEW cov: 11837 ft: 11837 corp: 2/10b lim: 25 exec/s: 0 rss: 70Mb L: 9/9 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:05.070 [2024-07-23 10:27:53.471316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.070 [2024-07-23 10:27:53.471362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.070 [2024-07-23 10:27:53.471436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.071 [2024-07-23 10:27:53.471452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.071 #7 NEW cov: 11968 ft: 12765 corp: 3/22b lim: 25 exec/s: 0 rss: 71Mb L: 12/12 MS: 3 CrossOver-ChangeByte-CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:05.071 [2024-07-23 10:27:53.521361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.071 [2024-07-23 10:27:53.521389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.071 [2024-07-23 10:27:53.521429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.071 [2024-07-23 10:27:53.521446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.071 #8 NEW cov: 11974 ft: 12992 corp: 4/35b lim: 25 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 CrossOver- 00:07:05.071 [2024-07-23 10:27:53.561473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.071 [2024-07-23 10:27:53.561500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.071 [2024-07-23 10:27:53.561538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.071 [2024-07-23 10:27:53.561555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.330 #9 NEW cov: 12059 ft: 13355 corp: 5/48b lim: 25 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:05.330 [2024-07-23 10:27:53.611988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.330 [2024-07-23 10:27:53.612015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.330 [2024-07-23 10:27:53.612073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.330 [2024-07-23 10:27:53.612089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.330 [2024-07-23 10:27:53.612143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.330 [2024-07-23 10:27:53.612160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.330 [2024-07-23 10:27:53.612213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:05.330 [2024-07-23 10:27:53.612228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.330 [2024-07-23 10:27:53.612282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:05.330 [2024-07-23 10:27:53.612297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:05.330 #10 NEW cov: 12059 ft: 13926 corp: 6/73b lim: 25 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:05.330 [2024-07-23 10:27:53.662153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.330 [2024-07-23 10:27:53.662184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.330 [2024-07-23 10:27:53.662235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.331 [2024-07-23 10:27:53.662251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.331 [2024-07-23 10:27:53.662309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.331 [2024-07-23 10:27:53.662324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.331 [2024-07-23 10:27:53.662377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:05.331 [2024-07-23 10:27:53.662392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.331 [2024-07-23 10:27:53.662446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:05.331 [2024-07-23 10:27:53.662462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:05.331 #16 NEW cov: 12059 ft: 13979 corp: 7/98b lim: 25 exec/s: 0 
rss: 71Mb L: 25/25 MS: 1 CopyPart- 00:07:05.331 [2024-07-23 10:27:53.712026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.331 [2024-07-23 10:27:53.712053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.331 [2024-07-23 10:27:53.712110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.331 [2024-07-23 10:27:53.712126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.331 [2024-07-23 10:27:53.712183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.331 [2024-07-23 10:27:53.712199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.331 #17 NEW cov: 12059 ft: 14309 corp: 8/115b lim: 25 exec/s: 0 rss: 72Mb L: 17/25 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:05.331 [2024-07-23 10:27:53.751895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.331 [2024-07-23 10:27:53.751923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.331 #21 NEW cov: 12059 ft: 14456 corp: 9/120b lim: 25 exec/s: 0 rss: 72Mb L: 5/25 MS: 4 CrossOver-CopyPart-CMP-CrossOver- DE: "\377\011"- 00:07:05.331 [2024-07-23 10:27:53.792124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.331 [2024-07-23 10:27:53.792151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.331 [2024-07-23 10:27:53.792192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.331 [2024-07-23 10:27:53.792208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.331 #22 NEW cov: 12059 ft: 14530 corp: 10/133b lim: 25 exec/s: 0 rss: 72Mb L: 13/25 MS: 1 ChangeByte- 00:07:05.590 [2024-07-23 10:27:53.842527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.590 [2024-07-23 10:27:53.842557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.590 [2024-07-23 10:27:53.842604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.590 [2024-07-23 10:27:53.842620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.590 [2024-07-23 10:27:53.842678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.590 [2024-07-23 10:27:53.842694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.590 [2024-07-23 10:27:53.842753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:05.590 [2024-07-23 10:27:53.842770] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.590 #23 NEW cov: 12059 ft: 14659 corp: 11/155b lim: 25 exec/s: 0 rss: 72Mb L: 22/25 MS: 1 CopyPart- 00:07:05.590 [2024-07-23 10:27:53.892682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.590 [2024-07-23 10:27:53.892710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.590 [2024-07-23 10:27:53.892757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.590 [2024-07-23 10:27:53.892774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.590 [2024-07-23 10:27:53.892833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.590 [2024-07-23 10:27:53.892848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.590 [2024-07-23 10:27:53.892905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:05.590 [2024-07-23 10:27:53.892920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.591 #24 NEW cov: 12059 ft: 14674 corp: 12/179b lim: 25 exec/s: 0 rss: 72Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:07:05.591 [2024-07-23 10:27:53.932415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.591 [2024-07-23 10:27:53.932444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.591 #25 NEW cov: 12059 ft: 14716 corp: 13/184b lim: 25 exec/s: 0 rss: 72Mb L: 5/25 MS: 1 ChangeBit- 00:07:05.591 [2024-07-23 10:27:53.982814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.591 [2024-07-23 10:27:53.982843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.591 [2024-07-23 10:27:53.982892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.591 [2024-07-23 10:27:53.982909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.591 [2024-07-23 10:27:53.982970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.591 [2024-07-23 10:27:53.982988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.591 #26 NEW cov: 12059 ft: 14775 corp: 14/201b lim: 25 exec/s: 0 rss: 72Mb L: 17/25 MS: 1 CrossOver- 00:07:05.591 [2024-07-23 10:27:54.023011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.591 [2024-07-23 10:27:54.023040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.591 [2024-07-23 10:27:54.023093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.591 [2024-07-23 10:27:54.023110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.591 [2024-07-23 10:27:54.023164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.591 [2024-07-23 10:27:54.023180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.591 [2024-07-23 10:27:54.023238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:05.591 [2024-07-23 10:27:54.023253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.591 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:05.591 #31 NEW cov: 12082 ft: 14809 corp: 15/225b lim: 25 exec/s: 0 rss: 72Mb L: 24/25 MS: 5 ChangeBit-ShuffleBytes-ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:07:05.591 [2024-07-23 10:27:54.062774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.591 [2024-07-23 10:27:54.062807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.591 #32 NEW cov: 12082 ft: 14821 corp: 16/233b lim: 25 exec/s: 0 rss: 72Mb L: 8/25 MS: 1 CrossOver- 00:07:05.851 [2024-07-23 10:27:54.103107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.851 [2024-07-23 10:27:54.103134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.103183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.851 [2024-07-23 10:27:54.103200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.103257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.851 [2024-07-23 10:27:54.103273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.851 #33 NEW cov: 12082 ft: 14864 corp: 17/250b lim: 25 exec/s: 33 rss: 72Mb L: 17/25 MS: 1 ChangeByte- 00:07:05.851 [2024-07-23 10:27:54.153377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.851 [2024-07-23 10:27:54.153404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.153453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.851 [2024-07-23 10:27:54.153470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.153526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.851 [2024-07-23 10:27:54.153540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.153598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:05.851 [2024-07-23 10:27:54.153614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.851 #34 NEW cov: 12082 ft: 14919 corp: 18/272b lim: 25 exec/s: 34 rss: 72Mb L: 22/25 MS: 1 ShuffleBytes- 00:07:05.851 [2024-07-23 10:27:54.203181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.851 [2024-07-23 10:27:54.203208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.851 #37 NEW cov: 12082 ft: 14933 corp: 19/280b lim: 25 exec/s: 37 rss: 72Mb L: 8/25 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:07:05.851 [2024-07-23 10:27:54.243530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.851 [2024-07-23 10:27:54.243556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.243604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.851 [2024-07-23 10:27:54.243620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.243682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.851 [2024-07-23 10:27:54.243698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.851 #38 NEW cov: 12082 ft: 14947 corp: 20/299b lim: 25 exec/s: 38 rss: 72Mb L: 19/25 MS: 1 EraseBytes- 00:07:05.851 [2024-07-23 10:27:54.283386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.851 [2024-07-23 10:27:54.283413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.851 #39 NEW cov: 12082 ft: 14965 corp: 21/306b lim: 25 exec/s: 39 rss: 72Mb L: 7/25 MS: 1 EraseBytes- 00:07:05.851 [2024-07-23 10:27:54.323775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:05.851 [2024-07-23 10:27:54.323808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.323850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:05.851 [2024-07-23 10:27:54.323867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.851 [2024-07-23 10:27:54.323922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:05.851 [2024-07-23 10:27:54.323936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.851 #40 NEW cov: 12082 ft: 14999 corp: 22/324b lim: 25 exec/s: 40 rss: 72Mb L: 18/25 MS: 1 InsertByte- 00:07:06.111 
[2024-07-23 10:27:54.364120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.111 [2024-07-23 10:27:54.364148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.364203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.111 [2024-07-23 10:27:54.364219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.364273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.111 [2024-07-23 10:27:54.364289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.364340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:06.111 [2024-07-23 10:27:54.364356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.364410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:06.111 [2024-07-23 10:27:54.364425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.111 #41 NEW cov: 12082 ft: 15037 corp: 23/349b lim: 25 exec/s: 41 rss: 72Mb L: 25/25 MS: 1 CopyPart- 00:07:06.111 [2024-07-23 10:27:54.414241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.111 [2024-07-23 10:27:54.414268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.414326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.111 [2024-07-23 10:27:54.414341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.414398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.111 [2024-07-23 10:27:54.414414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.414470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:06.111 [2024-07-23 10:27:54.414486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.414540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:06.111 [2024-07-23 10:27:54.414555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.111 #42 NEW cov: 12082 ft: 15041 corp: 24/374b lim: 25 exec/s: 42 rss: 72Mb L: 25/25 MS: 1 CopyPart- 00:07:06.111 [2024-07-23 10:27:54.464360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 
00:07:06.111 [2024-07-23 10:27:54.464388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.464459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.111 [2024-07-23 10:27:54.464476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.464532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.111 [2024-07-23 10:27:54.464547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.464600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:06.111 [2024-07-23 10:27:54.464617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.464671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:06.111 [2024-07-23 10:27:54.464687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.111 #43 NEW cov: 12082 ft: 15046 corp: 25/399b lim: 25 exec/s: 43 rss: 72Mb L: 25/25 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:06.111 [2024-07-23 10:27:54.503993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.111 [2024-07-23 10:27:54.504021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.111 #44 NEW cov: 12082 ft: 15058 corp: 26/407b lim: 25 exec/s: 44 rss: 72Mb L: 8/25 MS: 1 ChangeBinInt- 00:07:06.111 [2024-07-23 10:27:54.554395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.111 [2024-07-23 10:27:54.554421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.554484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.111 [2024-07-23 10:27:54.554501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.111 [2024-07-23 10:27:54.554560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.111 [2024-07-23 10:27:54.554577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.111 #45 NEW cov: 12082 ft: 15084 corp: 27/423b lim: 25 exec/s: 45 rss: 72Mb L: 16/25 MS: 1 InsertRepeatedBytes- 00:07:06.111 [2024-07-23 10:27:54.594256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.111 [2024-07-23 10:27:54.594283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.370 #46 NEW cov: 12082 ft: 15126 corp: 28/428b lim: 25 exec/s: 46 rss: 73Mb L: 5/25 MS: 1 
EraseBytes- 00:07:06.370 [2024-07-23 10:27:54.644921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.371 [2024-07-23 10:27:54.644948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.645003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.371 [2024-07-23 10:27:54.645019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.645072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.371 [2024-07-23 10:27:54.645103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.645156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:06.371 [2024-07-23 10:27:54.645172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.645228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:06.371 [2024-07-23 10:27:54.645244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.371 #47 NEW cov: 12082 ft: 15142 corp: 29/453b lim: 25 exec/s: 47 rss: 73Mb L: 25/25 MS: 1 CMP- DE: "\000\005"- 00:07:06.371 [2024-07-23 10:27:54.694649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.371 [2024-07-23 10:27:54.694676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.694717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.371 [2024-07-23 10:27:54.694732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.371 #51 NEW cov: 12082 ft: 15145 corp: 30/465b lim: 25 exec/s: 51 rss: 73Mb L: 12/25 MS: 4 CrossOver-InsertRepeatedBytes-ChangeBinInt-CMP- DE: "\001\030\232y\366\036\312\370"- 00:07:06.371 [2024-07-23 10:27:54.744936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.371 [2024-07-23 10:27:54.744962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.745003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.371 [2024-07-23 10:27:54.745020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.745077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.371 [2024-07-23 10:27:54.745094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.371 #52 NEW cov: 12082 ft: 
15158 corp: 31/481b lim: 25 exec/s: 52 rss: 73Mb L: 16/25 MS: 1 ShuffleBytes- 00:07:06.371 [2024-07-23 10:27:54.794933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.371 [2024-07-23 10:27:54.794960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.795002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.371 [2024-07-23 10:27:54.795018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.371 #53 NEW cov: 12082 ft: 15165 corp: 32/494b lim: 25 exec/s: 53 rss: 73Mb L: 13/25 MS: 1 EraseBytes- 00:07:06.371 [2024-07-23 10:27:54.835363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.371 [2024-07-23 10:27:54.835390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.835449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.371 [2024-07-23 10:27:54.835464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.835535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.371 [2024-07-23 10:27:54.835552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.835607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:06.371 [2024-07-23 10:27:54.835623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.371 [2024-07-23 10:27:54.835680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:06.371 [2024-07-23 10:27:54.835697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.631 #54 NEW cov: 12082 ft: 15188 corp: 33/519b lim: 25 exec/s: 54 rss: 73Mb L: 25/25 MS: 1 CopyPart- 00:07:06.631 [2024-07-23 10:27:54.885302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.631 [2024-07-23 10:27:54.885329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:54.885367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.631 [2024-07-23 10:27:54.885382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:54.885439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.631 [2024-07-23 10:27:54.885456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.631 #55 NEW cov: 12082 ft: 15197 corp: 
34/538b lim: 25 exec/s: 55 rss: 73Mb L: 19/25 MS: 1 CopyPart- 00:07:06.631 [2024-07-23 10:27:54.935422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.631 [2024-07-23 10:27:54.935450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:54.935508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.631 [2024-07-23 10:27:54.935524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:54.935582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.631 [2024-07-23 10:27:54.935599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.631 #56 NEW cov: 12082 ft: 15207 corp: 35/555b lim: 25 exec/s: 56 rss: 73Mb L: 17/25 MS: 1 CrossOver- 00:07:06.631 [2024-07-23 10:27:54.975642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.631 [2024-07-23 10:27:54.975669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:54.975724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.631 [2024-07-23 10:27:54.975740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:54.975802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:06.631 [2024-07-23 10:27:54.975818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:54.975873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:06.631 [2024-07-23 10:27:54.975889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.631 #57 NEW cov: 12082 ft: 15217 corp: 36/579b lim: 25 exec/s: 57 rss: 73Mb L: 24/25 MS: 1 ChangeByte- 00:07:06.631 [2024-07-23 10:27:55.015513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.631 [2024-07-23 10:27:55.015541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.631 [2024-07-23 10:27:55.015600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:06.631 [2024-07-23 10:27:55.015617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.631 #58 NEW cov: 12082 ft: 15218 corp: 37/591b lim: 25 exec/s: 58 rss: 73Mb L: 12/25 MS: 1 PersAutoDict- DE: "\377\011"- 00:07:06.631 [2024-07-23 10:27:55.065811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:06.631 [2024-07-23 10:27:55.065839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:06.631 [2024-07-23 10:27:55.065888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:06.631 [2024-07-23 10:27:55.065906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:06.631 [2024-07-23 10:27:55.065965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:06.631 [2024-07-23 10:27:55.065982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:06.631 #59 NEW cov: 12082 ft: 15260 corp: 38/608b lim: 25 exec/s: 59 rss: 73Mb L: 17/25 MS: 1 EraseBytes-
00:07:06.631 [2024-07-23 10:27:55.106021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:06.631 [2024-07-23 10:27:55.106048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:06.631 [2024-07-23 10:27:55.106095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:06.631 [2024-07-23 10:27:55.106112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:06.631 [2024-07-23 10:27:55.106184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:06.631 [2024-07-23 10:27:55.106201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:06.631 [2024-07-23 10:27:55.106259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:07:06.631 [2024-07-23 10:27:55.106276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:06.631 #60 NEW cov: 12082 ft: 15261 corp: 39/632b lim: 25 exec/s: 30 rss: 73Mb L: 24/25 MS: 1 ChangeBit-
00:07:06.631 #60 DONE cov: 12082 ft: 15261 corp: 39/632b lim: 25 exec/s: 30 rss: 73Mb
00:07:06.631 ###### Recommended dictionary. ######
00:07:06.631 "\000\000\000\000\000\000\000\000" # Uses: 2
00:07:06.631 "\377\011" # Uses: 1
00:07:06.631 "\000\005" # Uses: 0
00:07:06.631 "\001\030\232y\366\036\312\370" # Uses: 0
00:07:06.631 ###### End of recommended dictionary. ######
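
The "#60 DONE" line above is libFuzzer's end-of-run summary for this target (cov: covered code points, ft: features, corp: corpus units/total bytes, lim: current input-length cap, plus exec/s and rss), and the "Recommended dictionary." block lists the byte strings, printed with C-style octal escapes, that the CMP and PersAutoDict mutators found productive, each with its use count. As a sketch only, the same entries rewritten in standard AFL/libFuzzer dictionary syntax are shown below; the octal-to-hex conversion is mine, the file name is hypothetical, and it is an assumption (not confirmed by this log) that the SPDK wrapper exposes libFuzzer's -dict= option for feeding such a file back in.

  # nvmf_23.dict - hand-converted from the recommended-dictionary block above
  "\x00\x00\x00\x00\x00\x00\x00\x00"
  "\xFF\x09"
  "\x00\x05"
  "\x01\x18\x9Ay\xF6\x1E\xCA\xF8"

With a plain libFuzzer binary such a file would be passed as -dict=nvmf_23.dict so a later run starts from these tokens instead of rediscovering them.
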
00:07:06.631 Done 60 runs in 2 second(s)
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:06.891 10:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24
00:07:06.891 [2024-07-23 10:27:55.295810] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:06.891 [2024-07-23 10:27:55.295886] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434397 ]
00:07:07.151 EAL: No free 2048 kB hugepages reported on node 1
00:07:07.151 [2024-07-23 10:27:55.478383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:07.151 [2024-07-23 10:27:55.501281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.151 [2024-07-23 10:27:55.553727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:07.151 [2024-07-23 10:27:55.570075] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 ***
00:07:07.151 INFO: Running with entropic power schedule (0xFF, 100).
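
The xtrace above (nvmf/run.sh@23 through run.sh@45) shows how start_llvm_fuzz wires up fuzzer 24: it derives a per-fuzzer TCP port from the fuzzer number, rewrites the JSON target config to listen on that port, writes LeakSanitizer suppressions, and hands the matching transport ID to llvm_nvme_fuzz. A minimal bash sketch of that derivation follows; the variable names ($N, $rootdir) are my placeholders, and the "44 plus zero-padded N" port rule is one consistent reading of the printf %02d 24 / port=4424 pair, not a quote of run.sh itself.

  #!/usr/bin/env bash
  # Sketch of the per-run setup traced above (assumptions marked).
  N=24                                    # fuzzer_type from start_llvm_fuzz
  port="44$(printf '%02d' "$N")"          # 4424 for N=24 (assumed rule)
  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # from the log paths
  corpus_dir="$rootdir/../corpus/llvm_nvmf_$N"
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # Retarget the JSON config from the default port 4420 to the derived one.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$N.conf"
  # Suppress the two known-held allocations for LeakSanitizer (run.sh@41-42).
  printf 'leak:%s\n' spdk_nvmf_qpair_disconnect nvmf_ctrlr_create > /var/tmp/suppress_nvmf_fuzz

The only values the log itself guarantees are the printf %02d 24 output, port=4424, and the sed from trsvcid 4420 to 4424; the fuzzer then receives the same trid via -F and the rewritten config via -c, so target and fuzzer agree on port 4424.
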
00:07:07.151 INFO: Seed: 1929872471
00:07:07.151 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b),
00:07:07.151 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890),
00:07:07.151 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:07:07.151 INFO: A corpus is not provided, starting from an empty corpus
00:07:07.151 #2 INITED exec/s: 0 rss: 64Mb
00:07:07.151 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:07.151 This may also happen if the target rejected all inputs we tried so far
00:07:07.151 [2024-07-23 10:27:55.647835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:07.151 [2024-07-23 10:27:55.647880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:07.151 [2024-07-23 10:27:55.647990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:07.151 [2024-07-23 10:27:55.648013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:07.151 [2024-07-23 10:27:55.648124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:07.151 [2024-07-23 10:27:55.648143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:07.151 [2024-07-23 10:27:55.648249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:07.151 [2024-07-23 10:27:55.648271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:07.669 NEW_FUNC[1/692]: 0x4bf450 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685
00:07:07.669 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:07.669 #3 NEW cov: 11908 ft: 11905 corp: 2/100b lim: 100 exec/s: 0 rss: 70Mb L: 99/99 MS: 1 InsertRepeatedBytes-
00:07:07.669 [2024-07-23 10:27:55.978657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:07.669 [2024-07-23 10:27:55.978710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:07.669 [2024-07-23 10:27:55.978837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:07.669 [2024-07-23 10:27:55.978864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:07.669 [2024-07-23 10:27:55.978962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:07.669 [2024-07-23 10:27:55.978990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:07.669 [2024-07-23 10:27:55.979105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:55.979133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.669 NEW_FUNC[1/1]: 0x1609720 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1513 00:07:07.669 #4 NEW cov: 12040 ft: 12462 corp: 3/198b lim: 100 exec/s: 0 rss: 70Mb L: 98/99 MS: 1 EraseBytes- 00:07:07.669 [2024-07-23 10:27:56.059088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.059125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.059200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.059223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.059287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.059306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.059392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.059409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.059509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.059529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.669 #5 NEW cov: 12046 ft: 12719 corp: 4/298b lim: 100 exec/s: 0 rss: 70Mb L: 100/100 MS: 1 InsertByte- 00:07:07.669 [2024-07-23 10:27:56.109209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.109242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.109326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.109346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.109422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.109439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
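
Each *NOTICE* pair above is SPDK echoing the fuzzed command and the target's completion: sqid/cid identify the submission queue and command, cdw0 is completion dword 0, sqhd is the submission-queue head after the command, p is the phase tag, m the "more" bit, and dnr:1 means the controller set Do Not Retry. The "(00/0b)" pair is NVMe status code type 0x0 (generic command status) with status code 0x0b, which SPDK prints as INVALID NAMESPACE OR FORMAT. A tiny illustrative decoder for that pair (my helper, not part of SPDK; only the codes seen in this log are mapped):

  # decode_status SCT SC - name the NVMe status pair printed as "(SCT/SC)"
  decode_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
      00/00) echo "GENERIC: SUCCESSFUL COMPLETION" ;;
      00/0b) echo "GENERIC: INVALID NAMESPACE OR FORMAT" ;;
      *)     echo "unmapped: sct=$sct sc=$sc" ;;
    esac
  }
  decode_status 00 0b   # -> GENERIC: INVALID NAMESPACE OR FORMAT
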
00:07:07.669 [2024-07-23 10:27:56.109529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.109550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.109645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.109665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.669 #6 NEW cov: 12131 ft: 13016 corp: 5/398b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ChangeByte- 00:07:07.669 [2024-07-23 10:27:56.168816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.168845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.168907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.168925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.669 [2024-07-23 10:27:56.169034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.669 [2024-07-23 10:27:56.169052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.929 #7 NEW cov: 12131 ft: 13415 corp: 6/473b lim: 100 exec/s: 0 rss: 71Mb L: 75/100 MS: 1 EraseBytes- 00:07:07.929 [2024-07-23 10:27:56.219225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.219253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.219322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.219340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.219411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.219430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.219514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.219531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.929 #8 NEW cov: 12131 ft: 13543 corp: 7/571b lim: 100 exec/s: 0 rss: 71Mb L: 98/100 MS: 1 CopyPart- 00:07:07.929 [2024-07-23 10:27:56.279798] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3098476543630901248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.279825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.279905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.279923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.280011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.280031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.280125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.280143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.280237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.280253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.929 #9 NEW cov: 12131 ft: 13617 corp: 8/671b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ChangeByte- 00:07:07.929 [2024-07-23 10:27:56.329705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.329732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.329796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.929 [2024-07-23 10:27:56.329822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.929 [2024-07-23 10:27:56.329908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.930 [2024-07-23 10:27:56.329929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.930 [2024-07-23 10:27:56.330029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.930 [2024-07-23 10:27:56.330048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.930 #10 NEW cov: 12131 ft: 13651 corp: 9/759b lim: 100 exec/s: 0 rss: 71Mb L: 88/100 MS: 1 EraseBytes- 00:07:07.930 [2024-07-23 10:27:56.379473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:07.930 [2024-07-23 10:27:56.379502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.930 [2024-07-23 10:27:56.379567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.930 [2024-07-23 10:27:56.379591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.930 [2024-07-23 10:27:56.379650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.930 [2024-07-23 10:27:56.379668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.930 #11 NEW cov: 12131 ft: 13677 corp: 10/832b lim: 100 exec/s: 0 rss: 71Mb L: 73/100 MS: 1 EraseBytes- 00:07:08.189 [2024-07-23 10:27:56.450447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.189 [2024-07-23 10:27:56.450476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.189 [2024-07-23 10:27:56.450557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.189 [2024-07-23 10:27:56.450575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.189 [2024-07-23 10:27:56.450648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.189 [2024-07-23 10:27:56.450668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.189 [2024-07-23 10:27:56.450752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.189 [2024-07-23 10:27:56.450772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.189 [2024-07-23 10:27:56.450861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:33 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.189 [2024-07-23 10:27:56.450880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.189 #12 NEW cov: 12131 ft: 13745 corp: 11/932b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ChangeBit- 00:07:08.189 [2024-07-23 10:27:56.520814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.189 [2024-07-23 10:27:56.520845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.189 [2024-07-23 10:27:56.520925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.520944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.521009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.521029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.521119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.521141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.521236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.521257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.190 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:08.190 #13 NEW cov: 12154 ft: 13870 corp: 12/1032b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 ChangeByte- 00:07:08.190 [2024-07-23 10:27:56.570321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:289360691285197828 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.570351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.570425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:289360691352306692 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.570443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.570521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:289360691352306692 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.570541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.190 #16 NEW cov: 12154 ft: 13931 corp: 13/1098b lim: 100 exec/s: 0 rss: 72Mb L: 66/100 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:08.190 [2024-07-23 10:27:56.620807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.620837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.620910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.620926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.620991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.621009] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.621099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.621119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.190 #17 NEW cov: 12154 ft: 13968 corp: 14/1196b lim: 100 exec/s: 17 rss: 72Mb L: 98/100 MS: 1 ChangeBinInt- 00:07:08.190 [2024-07-23 10:27:56.671300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.671331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.671419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.671437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.671513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.671529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.671614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.671631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.190 [2024-07-23 10:27:56.671729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:548555776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.190 [2024-07-23 10:27:56.671748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.449 #18 NEW cov: 12154 ft: 13981 corp: 15/1296b lim: 100 exec/s: 18 rss: 72Mb L: 100/100 MS: 1 CMP- DE: "\001\000\177\342\310 \262L"- 00:07:08.450 [2024-07-23 10:27:56.721244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.721272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.721349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.721367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.721436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.721456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.721555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.721572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.450 #19 NEW cov: 12154 ft: 14002 corp: 16/1395b lim: 100 exec/s: 19 rss: 72Mb L: 99/100 MS: 1 ChangeByte- 00:07:08.450 [2024-07-23 10:27:56.771734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.771762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.771847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.771866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.771936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.771953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.772046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.772062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.772150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:548555776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.772169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.450 #20 NEW cov: 12154 ft: 14037 corp: 17/1495b lim: 100 exec/s: 20 rss: 72Mb L: 100/100 MS: 1 CrossOver- 00:07:08.450 [2024-07-23 10:27:56.831697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.831724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.831821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:654311424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.831841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.831897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.831917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.832003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.832022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.450 #21 NEW cov: 12154 ft: 14065 corp: 18/1593b lim: 100 exec/s: 21 rss: 72Mb L: 98/100 MS: 1 ChangeByte- 00:07:08.450 [2024-07-23 10:27:56.881294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.881321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.881409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.881428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.450 #22 NEW cov: 12154 ft: 14481 corp: 19/1646b lim: 100 exec/s: 22 rss: 72Mb L: 53/100 MS: 1 InsertRepeatedBytes- 00:07:08.450 [2024-07-23 10:27:56.932359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.932386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.932462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.932479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.932558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.932578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.932672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.932693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.450 [2024-07-23 10:27:56.932783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:33 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.450 [2024-07-23 10:27:56.932813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.709 #23 NEW cov: 12154 ft: 14491 corp: 20/1746b lim: 100 exec/s: 23 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:07:08.709 [2024-07-23 10:27:56.992736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.709 [2024-07-23 10:27:56.992763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.709 [2024-07-23 10:27:56.992875] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.709 [2024-07-23 10:27:56.992894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.709 [2024-07-23 10:27:56.992967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.709 [2024-07-23 10:27:56.992986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.709 [2024-07-23 10:27:56.993074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:56.993093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:56.993179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:56.993196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.710 #24 NEW cov: 12154 ft: 14546 corp: 21/1846b lim: 100 exec/s: 24 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:07:08.710 [2024-07-23 10:27:57.052547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.052574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.052675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.052695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.052784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.052804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.052886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.052905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.710 #25 NEW cov: 12154 ft: 14572 corp: 22/1944b lim: 100 exec/s: 25 rss: 72Mb L: 98/100 MS: 1 CopyPart- 00:07:08.710 [2024-07-23 10:27:57.103256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.103286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.103378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.103396] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.103497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.103517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.103607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.103623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.103714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.103731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.710 #26 NEW cov: 12154 ft: 14580 corp: 23/2044b lim: 100 exec/s: 26 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:07:08.710 [2024-07-23 10:27:57.163491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3242591731706757120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.163520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.163594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.163615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.163696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.163713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.163813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.163829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.710 [2024-07-23 10:27:57.163920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.710 [2024-07-23 10:27:57.163938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.710 #27 NEW cov: 12154 ft: 14609 corp: 24/2144b lim: 100 exec/s: 27 rss: 72Mb L: 100/100 MS: 1 ChangeBinInt- 00:07:08.969 [2024-07-23 10:27:57.223358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.223386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:08.969 [2024-07-23 10:27:57.223477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8791026472627208192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.223496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.969 [2024-07-23 10:27:57.223565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.223581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.969 [2024-07-23 10:27:57.223668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.223689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.969 #28 NEW cov: 12154 ft: 14621 corp: 25/2243b lim: 100 exec/s: 28 rss: 72Mb L: 99/100 MS: 1 InsertByte- 00:07:08.969 [2024-07-23 10:27:57.283304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.283333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.969 [2024-07-23 10:27:57.283401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.283421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.969 [2024-07-23 10:27:57.283502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.283520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.969 #29 NEW cov: 12154 ft: 14635 corp: 26/2316b lim: 100 exec/s: 29 rss: 72Mb L: 73/100 MS: 1 CopyPart- 00:07:08.969 [2024-07-23 10:27:57.344153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.344180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.969 [2024-07-23 10:27:57.344276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.344294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.969 [2024-07-23 10:27:57.344376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.969 [2024-07-23 10:27:57.344397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.969 [2024-07-23 10:27:57.344487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 
lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.344503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.970 [2024-07-23 10:27:57.344591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:33 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.344612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.970 #30 NEW cov: 12154 ft: 14654 corp: 27/2416b lim: 100 exec/s: 30 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:07:08.970 [2024-07-23 10:27:57.403103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.403131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.970 #33 NEW cov: 12154 ft: 15453 corp: 28/2445b lim: 100 exec/s: 33 rss: 73Mb L: 29/100 MS: 3 CrossOver-ChangeBit-CopyPart- 00:07:08.970 [2024-07-23 10:27:57.454645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.454673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.970 [2024-07-23 10:27:57.454757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.454774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.970 [2024-07-23 10:27:57.454870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.454884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.970 [2024-07-23 10:27:57.454970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.454990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.970 [2024-07-23 10:27:57.455076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:33 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.970 [2024-07-23 10:27:57.455094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:09.229 #34 NEW cov: 12154 ft: 15485 corp: 29/2545b lim: 100 exec/s: 34 rss: 73Mb L: 100/100 MS: 1 ChangeBit- 00:07:09.229 [2024-07-23 10:27:57.514719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.229 [2024-07-23 10:27:57.514747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.229 [2024-07-23 10:27:57.514832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.229 [2024-07-23 10:27:57.514851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.229 [2024-07-23 10:27:57.514916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.229 [2024-07-23 10:27:57.514935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.229 [2024-07-23 10:27:57.515026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.229 [2024-07-23 10:27:57.515046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.230 [2024-07-23 10:27:57.515143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.230 [2024-07-23 10:27:57.515162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:09.230 #35 NEW cov: 12154 ft: 15491 corp: 30/2645b lim: 100 exec/s: 35 rss: 73Mb L: 100/100 MS: 1 ChangeBit- 00:07:09.230 [2024-07-23 10:27:57.584101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.230 [2024-07-23 10:27:57.584131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.230 [2024-07-23 10:27:57.584210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.230 [2024-07-23 10:27:57.584225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.230 #36 NEW cov: 12154 ft: 15499 corp: 31/2698b lim: 100 exec/s: 18 rss: 73Mb L: 53/100 MS: 1 ShuffleBytes- 00:07:09.230 #36 DONE cov: 12154 ft: 15499 corp: 31/2698b lim: 100 exec/s: 18 rss: 73Mb 00:07:09.230 ###### Recommended dictionary. ###### 00:07:09.230 "\001\000\177\342\310 \262L" # Uses: 0 00:07:09.230 ###### End of recommended dictionary. 
###### 00:07:09.230 Done 36 runs in 2 second(s) 00:07:09.490 10:27:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.490 10:27:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.490 10:27:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.490 10:27:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:09.490 00:07:09.490 real 1m5.731s 00:07:09.490 user 1m39.402s 00:07:09.490 sys 0m9.718s 00:07:09.490 10:27:57 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.490 10:27:57 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:09.491 ************************************ 00:07:09.491 END TEST nvmf_fuzz 00:07:09.491 ************************************ 00:07:09.491 10:27:57 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:09.491 10:27:57 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:09.491 10:27:57 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:09.491 10:27:57 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.491 10:27:57 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.491 10:27:57 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:09.491 ************************************ 00:07:09.491 START TEST vfio_fuzz 00:07:09.491 ************************************ 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:09.491 * Looking for test storage... 00:07:09.491 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # 
CONFIG_CUSTOMOCF=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # 
CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # 
CONFIG_MAX_LCORES= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:09.491 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:09.492 #define SPDK_CONFIG_H 00:07:09.492 #define SPDK_CONFIG_APPS 1 00:07:09.492 #define SPDK_CONFIG_ARCH native 00:07:09.492 #undef SPDK_CONFIG_ASAN 00:07:09.492 #undef SPDK_CONFIG_AVAHI 00:07:09.492 #undef SPDK_CONFIG_CET 00:07:09.492 #define SPDK_CONFIG_COVERAGE 1 00:07:09.492 #define SPDK_CONFIG_CROSS_PREFIX 00:07:09.492 #undef SPDK_CONFIG_CRYPTO 00:07:09.492 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:09.492 #undef SPDK_CONFIG_CUSTOMOCF 00:07:09.492 #undef SPDK_CONFIG_DAOS 00:07:09.492 #define SPDK_CONFIG_DAOS_DIR 00:07:09.492 #define SPDK_CONFIG_DEBUG 1 00:07:09.492 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:09.492 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:09.492 #define SPDK_CONFIG_DPDK_INC_DIR 
//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:09.492 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:09.492 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:09.492 #undef SPDK_CONFIG_DPDK_UADK 00:07:09.492 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:09.492 #define SPDK_CONFIG_EXAMPLES 1 00:07:09.492 #undef SPDK_CONFIG_FC 00:07:09.492 #define SPDK_CONFIG_FC_PATH 00:07:09.492 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:09.492 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:09.492 #undef SPDK_CONFIG_FUSE 00:07:09.492 #define SPDK_CONFIG_FUZZER 1 00:07:09.492 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:09.492 #undef SPDK_CONFIG_GOLANG 00:07:09.492 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:09.492 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:09.492 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:09.492 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:09.492 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:09.492 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:09.492 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:09.492 #define SPDK_CONFIG_IDXD 1 00:07:09.492 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:09.492 #undef SPDK_CONFIG_IPSEC_MB 00:07:09.492 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:09.492 #define SPDK_CONFIG_ISAL 1 00:07:09.492 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:09.492 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:09.492 #define SPDK_CONFIG_LIBDIR 00:07:09.492 #undef SPDK_CONFIG_LTO 00:07:09.492 #define SPDK_CONFIG_MAX_LCORES 00:07:09.492 #define SPDK_CONFIG_NVME_CUSE 1 00:07:09.492 #undef SPDK_CONFIG_OCF 00:07:09.492 #define SPDK_CONFIG_OCF_PATH 00:07:09.492 #define SPDK_CONFIG_OPENSSL_PATH 00:07:09.492 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:09.492 #define SPDK_CONFIG_PGO_DIR 00:07:09.492 #undef SPDK_CONFIG_PGO_USE 00:07:09.492 #define SPDK_CONFIG_PREFIX /usr/local 00:07:09.492 #undef SPDK_CONFIG_RAID5F 00:07:09.492 #undef SPDK_CONFIG_RBD 00:07:09.492 #define SPDK_CONFIG_RDMA 1 00:07:09.492 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:09.492 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:09.492 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:09.492 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:09.492 #undef SPDK_CONFIG_SHARED 00:07:09.492 #undef SPDK_CONFIG_SMA 00:07:09.492 #define SPDK_CONFIG_TESTS 1 00:07:09.492 #undef SPDK_CONFIG_TSAN 00:07:09.492 #define SPDK_CONFIG_UBLK 1 00:07:09.492 #define SPDK_CONFIG_UBSAN 1 00:07:09.492 #undef SPDK_CONFIG_UNIT_TESTS 00:07:09.492 #undef SPDK_CONFIG_URING 00:07:09.492 #define SPDK_CONFIG_URING_PATH 00:07:09.492 #undef SPDK_CONFIG_URING_ZNS 00:07:09.492 #undef SPDK_CONFIG_USDT 00:07:09.492 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:09.492 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:09.492 #define SPDK_CONFIG_VFIO_USER 1 00:07:09.492 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:09.492 #define SPDK_CONFIG_VHOST 1 00:07:09.492 #define SPDK_CONFIG_VIRTIO 1 00:07:09.492 #undef SPDK_CONFIG_VTUNE 00:07:09.492 #define SPDK_CONFIG_VTUNE_DIR 00:07:09.492 #define SPDK_CONFIG_WERROR 1 00:07:09.492 #define SPDK_CONFIG_WPDK_DIR 00:07:09.492 #undef SPDK_CONFIG_XNVME 00:07:09.492 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:09.492 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@57 -- # : 1 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@61 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # : 1 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # : 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # : 0 00:07:09.493 
10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # : 1 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # : 1 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # : 1 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # : true 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # : 0 00:07:09.493 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # : 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@166 -- # : 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # : 0 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:09.494 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # 
PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # cat 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:09.755 10:27:57 
llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:07:09.755 10:27:57 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j72 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3434754 ]] 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # kill -0 3434754 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:09.755 10:27:58 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.d5WmjH 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.d5WmjH/tests/vfio /tmp/spdk.d5WmjH 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # df -T 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=945618944 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4338810880 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=48906452992 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742542848 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=12836089856 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 
00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866558976 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871269376 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342702080 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348510208 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5808128 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870736896 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871273472 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=536576 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174248960 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174253056 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:09.756 * Looking for test storage... 
00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@373 -- # target_space=48906452992 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # new_size=15050682368 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:09.756 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # return 0 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:09.756 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:09.757 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:07:09.757 10:27:58 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:09.757 [2024-07-23 10:27:58.109289] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:09.757 [2024-07-23 10:27:58.109380] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434892 ] 00:07:09.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.757 [2024-07-23 10:27:58.185821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.757 [2024-07-23 10:27:58.230245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.017 INFO: Running with entropic power schedule (0xFF, 100). 00:07:10.017 INFO: Seed: 472908280 00:07:10.017 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:07:10.017 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:07:10.017 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:10.017 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.017 #2 INITED exec/s: 0 rss: 66Mb 00:07:10.017 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:10.017 This may also happen if the target rejected all inputs we tried so far 00:07:10.017 [2024-07-23 10:27:58.478679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:10.535 NEW_FUNC[1/655]: 0x4933d0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:10.535 NEW_FUNC[2/655]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:10.535 #26 NEW cov: 10918 ft: 10560 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 4 InsertByte-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:10.794 NEW_FUNC[1/1]: 0x1318a30 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:07:10.794 #32 NEW cov: 10939 ft: 13775 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:11.053 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:11.053 #40 NEW cov: 10959 ft: 14556 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 3 ChangeBinInt-CMP-InsertByte- DE: "\372\000\000\000"- 00:07:11.312 #41 NEW cov: 10959 ft: 15050 corp: 5/25b lim: 6 exec/s: 41 rss: 75Mb L: 6/6 MS: 1 CrossOver- 00:07:11.312 #42 NEW cov: 10959 ft: 16208 corp: 6/31b lim: 6 exec/s: 42 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:11.571 #53 NEW cov: 10959 ft: 16517 corp: 7/37b lim: 6 exec/s: 53 rss: 75Mb L: 6/6 MS: 1 CrossOver- 00:07:11.830 #54 NEW cov: 10959 ft: 17040 corp: 8/43b lim: 6 exec/s: 54 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:07:11.830 #55 NEW cov: 10966 ft: 17487 corp: 9/49b lim: 6 exec/s: 55 rss: 75Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:12.089 #56 NEW 
cov: 10966 ft: 17832 corp: 10/55b lim: 6 exec/s: 28 rss: 75Mb L: 6/6 MS: 1 ChangeByte- 00:07:12.089 #56 DONE cov: 10966 ft: 17832 corp: 10/55b lim: 6 exec/s: 28 rss: 75Mb 00:07:12.089 ###### Recommended dictionary. ###### 00:07:12.089 "\372\000\000\000" # Uses: 1 00:07:12.089 ###### End of recommended dictionary. ###### 00:07:12.089 Done 56 runs in 2 second(s) 00:07:12.089 [2024-07-23 10:28:00.541012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:12.358 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:12.358 10:28:00 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:12.358 [2024-07-23 10:28:00.853026] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:12.358 [2024-07-23 10:28:00.853124] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435261 ] 00:07:12.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.620 [2024-07-23 10:28:00.928385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.620 [2024-07-23 10:28:00.975049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.880 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.880 INFO: Seed: 3215906115 00:07:12.880 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:07:12.880 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:07:12.880 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:12.880 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.880 #2 INITED exec/s: 0 rss: 67Mb 00:07:12.880 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.880 This may also happen if the target rejected all inputs we tried so far 00:07:12.880 [2024-07-23 10:28:01.222003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:12.880 [2024-07-23 10:28:01.274841] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:12.880 [2024-07-23 10:28:01.274868] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:12.880 [2024-07-23 10:28:01.274886] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:13.398 NEW_FUNC[1/653]: 0x493970 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:13.398 NEW_FUNC[2/653]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:13.398 #25 NEW cov: 10848 ft: 10882 corp: 2/5b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 3 CrossOver-CopyPart-CrossOver- 00:07:13.398 [2024-07-23 10:28:01.744785] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:13.398 [2024-07-23 10:28:01.744827] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:13.398 [2024-07-23 10:28:01.744845] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:13.398 NEW_FUNC[1/5]: 0x142e810 in _nvmf_vfio_user_req_free /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:5305 00:07:13.398 NEW_FUNC[2/5]: 0x1a49d10 in reactor_post_process_lw_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:864 00:07:13.398 #26 NEW cov: 10935 ft: 13984 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:13.657 [2024-07-23 10:28:01.928449] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:13.657 [2024-07-23 10:28:01.928477] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:13.657 [2024-07-23 10:28:01.928494] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:13.657 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:13.657 #27 NEW cov: 10955 ft: 15646 corp: 4/13b lim: 4 
exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:13.657 [2024-07-23 10:28:02.106175] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:13.657 [2024-07-23 10:28:02.106201] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:13.657 [2024-07-23 10:28:02.106219] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:13.917 #28 NEW cov: 10955 ft: 15688 corp: 5/17b lim: 4 exec/s: 28 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:13.917 [2024-07-23 10:28:02.280589] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:13.917 [2024-07-23 10:28:02.280614] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:13.917 [2024-07-23 10:28:02.280632] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:13.917 #29 NEW cov: 10955 ft: 16128 corp: 6/21b lim: 4 exec/s: 29 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:14.176 [2024-07-23 10:28:02.452814] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:14.176 [2024-07-23 10:28:02.452838] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:14.176 [2024-07-23 10:28:02.452855] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:14.176 #30 NEW cov: 10955 ft: 16210 corp: 7/25b lim: 4 exec/s: 30 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:07:14.176 [2024-07-23 10:28:02.624132] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:14.176 [2024-07-23 10:28:02.624156] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:14.176 [2024-07-23 10:28:02.624189] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:14.434 #31 NEW cov: 10955 ft: 16279 corp: 8/29b lim: 4 exec/s: 31 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:14.434 [2024-07-23 10:28:02.797680] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:14.434 [2024-07-23 10:28:02.797703] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:14.434 [2024-07-23 10:28:02.797721] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:14.434 #39 NEW cov: 10955 ft: 16486 corp: 9/33b lim: 4 exec/s: 39 rss: 75Mb L: 4/4 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:07:14.693 [2024-07-23 10:28:02.978843] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:14.693 [2024-07-23 10:28:02.978867] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:14.693 [2024-07-23 10:28:02.978884] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:14.693 #40 NEW cov: 10962 ft: 16821 corp: 10/37b lim: 4 exec/s: 40 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:14.693 [2024-07-23 10:28:03.161104] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:14.693 [2024-07-23 10:28:03.161127] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:14.693 [2024-07-23 10:28:03.161145] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:14.953 #46 NEW cov: 10962 ft: 17200 corp: 11/41b lim: 4 exec/s: 23 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:07:14.953 #46 DONE cov: 10962 ft: 17200 corp: 11/41b lim: 4 exec/s: 23 rss: 
75Mb 00:07:14.953 Done 46 runs in 2 second(s) 00:07:14.953 [2024-07-23 10:28:03.286999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:15.213 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:15.213 10:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:15.213 [2024-07-23 10:28:03.596309] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:15.213 [2024-07-23 10:28:03.596394] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435612 ] 00:07:15.213 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.213 [2024-07-23 10:28:03.672965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.472 [2024-07-23 10:28:03.717634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.472 INFO: Running with entropic power schedule (0xFF, 100). 00:07:15.472 INFO: Seed: 1662945019 00:07:15.472 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:07:15.472 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:07:15.472 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:15.472 INFO: A corpus is not provided, starting from an empty corpus 00:07:15.472 #2 INITED exec/s: 0 rss: 66Mb 00:07:15.472 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:15.472 This may also happen if the target rejected all inputs we tried so far 00:07:15.472 [2024-07-23 10:28:03.964588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:15.731 [2024-07-23 10:28:04.010687] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:15.991 NEW_FUNC[1/656]: 0x494350 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:15.991 NEW_FUNC[2/656]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:15.991 #72 NEW cov: 10901 ft: 10825 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 ShuffleBytes-ChangeByte-ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:07:15.991 [2024-07-23 10:28:04.479563] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:16.251 NEW_FUNC[1/1]: 0x499a70 in io_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:393 00:07:16.251 #88 NEW cov: 10921 ft: 14319 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:16.251 [2024-07-23 10:28:04.649786] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:16.509 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:16.509 #95 NEW cov: 10938 ft: 15073 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 InsertRepeatedBytes-CopyPart- 00:07:16.509 [2024-07-23 10:28:04.832442] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:16.509 #96 NEW cov: 10938 ft: 15845 corp: 5/33b lim: 8 exec/s: 96 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:16.509 [2024-07-23 10:28:05.000366] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:16.767 #97 NEW cov: 10938 ft: 15925 corp: 6/41b lim: 8 exec/s: 97 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:07:16.767 [2024-07-23 10:28:05.172121] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:17.025 #98 NEW cov: 10938 ft: 16352 corp: 7/49b lim: 8 exec/s: 98 rss: 75Mb L: 8/8 MS: 1 ChangeByte- 00:07:17.026 [2024-07-23 10:28:05.346001] 
vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:17.026 #99 NEW cov: 10938 ft: 16920 corp: 8/57b lim: 8 exec/s: 99 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:07:17.026 [2024-07-23 10:28:05.523886] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:17.308 #102 NEW cov: 10938 ft: 17109 corp: 9/65b lim: 8 exec/s: 102 rss: 75Mb L: 8/8 MS: 3 EraseBytes-EraseBytes-CopyPart- 00:07:17.308 [2024-07-23 10:28:05.694662] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:17.638 #103 NEW cov: 10945 ft: 17933 corp: 10/73b lim: 8 exec/s: 103 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:17.638 [2024-07-23 10:28:05.879801] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:17.638 #104 NEW cov: 10945 ft: 18509 corp: 11/81b lim: 8 exec/s: 52 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:17.638 #104 DONE cov: 10945 ft: 18509 corp: 11/81b lim: 8 exec/s: 52 rss: 75Mb 00:07:17.638 Done 104 runs in 2 second(s) 00:07:17.638 [2024-07-23 10:28:06.004007] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:17.907 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:17.907 10:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 
0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:17.907 [2024-07-23 10:28:06.306276] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:17.907 [2024-07-23 10:28:06.306351] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435970 ] 00:07:17.907 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.907 [2024-07-23 10:28:06.378918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.166 [2024-07-23 10:28:06.422859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.166 INFO: Running with entropic power schedule (0xFF, 100). 00:07:18.166 INFO: Seed: 67967431 00:07:18.166 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:07:18.166 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:07:18.166 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:18.166 INFO: A corpus is not provided, starting from an empty corpus 00:07:18.166 #2 INITED exec/s: 0 rss: 67Mb 00:07:18.166 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:18.166 This may also happen if the target rejected all inputs we tried so far 00:07:18.166 [2024-07-23 10:28:06.665346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:18.685 NEW_FUNC[1/657]: 0x494a30 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:18.685 NEW_FUNC[2/657]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:18.685 #76 NEW cov: 10912 ft: 10540 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 4 InsertRepeatedBytes-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:07:18.944 #77 NEW cov: 10926 ft: 14243 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:18.944 #88 NEW cov: 10929 ft: 15236 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:19.204 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:19.204 #89 NEW cov: 10946 ft: 15419 corp: 5/129b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:19.463 #90 NEW cov: 10946 ft: 15642 corp: 6/161b lim: 32 exec/s: 90 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:19.463 #96 NEW cov: 10946 ft: 15918 corp: 7/193b lim: 32 exec/s: 96 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:19.722 #97 NEW cov: 10946 ft: 16141 corp: 8/225b lim: 32 exec/s: 97 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:19.982 #98 NEW cov: 10946 ft: 16199 corp: 9/257b lim: 32 exec/s: 98 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:19.982 #99 NEW cov: 10953 ft: 16242 corp: 10/289b lim: 32 exec/s: 99 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:07:20.241 #105 NEW cov: 10953 ft: 16266 corp: 11/321b lim: 32 exec/s: 105 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:07:20.500 #106 NEW cov: 10953 ft: 16289 corp: 12/353b lim: 32 exec/s: 53 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:07:20.500 #106 DONE cov: 10953 ft: 
16289 corp: 12/353b lim: 32 exec/s: 53 rss: 75Mb 00:07:20.500 Done 106 runs in 2 second(s) 00:07:20.501 [2024-07-23 10:28:08.805007] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:20.760 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:20.761 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:20.761 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:20.761 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:20.761 10:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:20.761 [2024-07-23 10:28:09.089217] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:20.761 [2024-07-23 10:28:09.089288] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436373 ] 00:07:20.761 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.761 [2024-07-23 10:28:09.161234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.761 [2024-07-23 10:28:09.204538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.021 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.021 INFO: Seed: 2855975761 00:07:21.021 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:07:21.021 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:07:21.021 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:21.021 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.021 #2 INITED exec/s: 0 rss: 65Mb 00:07:21.021 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:21.021 This may also happen if the target rejected all inputs we tried so far 00:07:21.021 [2024-07-23 10:28:09.452723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:21.540 NEW_FUNC[1/656]: 0x4952b0 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:21.540 NEW_FUNC[2/656]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:21.540 #103 NEW cov: 10911 ft: 10791 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:21.799 NEW_FUNC[1/1]: 0x1410750 in sq_headp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:565 00:07:21.799 #109 NEW cov: 10927 ft: 13337 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:07:21.799 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:21.799 #110 NEW cov: 10944 ft: 14570 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:22.058 #111 NEW cov: 10944 ft: 15335 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:22.317 #112 NEW cov: 10944 ft: 15518 corp: 6/161b lim: 32 exec/s: 112 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:22.317 #113 NEW cov: 10944 ft: 15810 corp: 7/193b lim: 32 exec/s: 113 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:22.577 #114 NEW cov: 10944 ft: 16271 corp: 8/225b lim: 32 exec/s: 114 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:22.577 #115 NEW cov: 10944 ft: 16839 corp: 9/257b lim: 32 exec/s: 115 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:22.836 #116 NEW cov: 10951 ft: 16912 corp: 10/289b lim: 32 exec/s: 116 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:23.095 #132 NEW cov: 10951 ft: 17208 corp: 11/321b lim: 32 exec/s: 132 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\201\000\000\000"- 00:07:23.095 #138 NEW cov: 10951 ft: 17295 corp: 12/353b lim: 32 exec/s: 69 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:23.095 #138 DONE cov: 10951 ft: 17295 corp: 12/353b lim: 32 exec/s: 69 rss: 74Mb 00:07:23.095 ###### Recommended dictionary. ###### 00:07:23.095 "\201\000\000\000" # Uses: 0 00:07:23.095 ###### End of recommended dictionary. 
###### 00:07:23.095 Done 138 runs in 2 second(s) 00:07:23.095 [2024-07-23 10:28:11.561976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:23.354 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.354 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:23.355 10:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:23.355 [2024-07-23 10:28:11.850263] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
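Lines of the form "#N NEW cov: C ft: F corp: X/Yb lim: L exec/s: E rss: RMb" in these runs are standard libFuzzer status output: N is the input count, cov the number of covered edges, ft the feature count, corp the corpus size in entries/bytes, and exec/s the execution rate. A throwaway awk sketch for pulling the final figures out of such a log (the file name fuzzer.log is an assumption):

  # Report the last cov: and exec/s: values seen on libFuzzer status lines.
  awk '/#[0-9]+[[:space:]]+(NEW|REDUCE|DONE)/ {
      for (i = 1; i <= NF; i++) {
          if ($i == "cov:")    cov  = $(i + 1)
          if ($i == "exec/s:") rate = $(i + 1)
      }
  }
  END { printf "cov=%s exec/s=%s\n", cov, rate }' fuzzer.log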
00:07:23.355 [2024-07-23 10:28:11.850325] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436749 ] 00:07:23.613 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.613 [2024-07-23 10:28:11.920890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.613 [2024-07-23 10:28:11.965028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.873 INFO: Running with entropic power schedule (0xFF, 100). 00:07:23.873 INFO: Seed: 1321016018 00:07:23.873 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:07:23.873 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:07:23.873 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:23.873 INFO: A corpus is not provided, starting from an empty corpus 00:07:23.873 #2 INITED exec/s: 0 rss: 67Mb 00:07:23.873 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:23.873 This may also happen if the target rejected all inputs we tried so far 00:07:23.873 [2024-07-23 10:28:12.211644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:23.873 [2024-07-23 10:28:12.252809] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:23.873 [2024-07-23 10:28:12.252847] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:24.392 NEW_FUNC[1/657]: 0x495cb0 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:24.392 NEW_FUNC[2/657]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:24.392 #6 NEW cov: 10912 ft: 10871 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 CMP-ChangeBit-ChangeBinInt-InsertRepeatedBytes- DE: "\001\000\000\003"- 00:07:24.392 [2024-07-23 10:28:12.738192] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:24.392 [2024-07-23 10:28:12.738241] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:24.392 NEW_FUNC[1/1]: 0x1dbfb90 in timed_poller_compare /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:320 00:07:24.392 #27 NEW cov: 10940 ft: 13568 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:24.651 [2024-07-23 10:28:12.926831] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:24.651 [2024-07-23 10:28:12.926876] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:24.651 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:24.651 #28 NEW cov: 10957 ft: 15547 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:07:24.651 [2024-07-23 10:28:13.117956] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:24.651 [2024-07-23 10:28:13.117991] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:24.910 #29 NEW cov: 10957 ft: 16013 corp: 5/53b lim: 13 exec/s: 29 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:07:24.910 [2024-07-23 10:28:13.296775] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:24.910 [2024-07-23 10:28:13.296820] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:24.910 #40 NEW cov: 10957 ft: 16997 corp: 6/66b lim: 13 exec/s: 40 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:25.168 [2024-07-23 10:28:13.483670] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:25.168 [2024-07-23 10:28:13.483703] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:25.168 #41 NEW cov: 10957 ft: 17117 corp: 7/79b lim: 13 exec/s: 41 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:25.168 [2024-07-23 10:28:13.665459] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:25.168 [2024-07-23 10:28:13.665491] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:25.427 #42 NEW cov: 10957 ft: 17324 corp: 8/92b lim: 13 exec/s: 42 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:25.427 [2024-07-23 10:28:13.843991] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:25.427 [2024-07-23 10:28:13.844024] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:25.686 #48 NEW cov: 10957 ft: 17439 corp: 9/105b lim: 13 exec/s: 48 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:25.686 [2024-07-23 10:28:14.021750] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:25.686 [2024-07-23 10:28:14.021782] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:25.686 #49 NEW cov: 10964 ft: 17615 corp: 10/118b lim: 13 exec/s: 49 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:07:25.945 [2024-07-23 10:28:14.202430] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:25.945 [2024-07-23 10:28:14.202463] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:25.945 #50 NEW cov: 10964 ft: 17666 corp: 11/131b lim: 13 exec/s: 25 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:07:25.945 #50 DONE cov: 10964 ft: 17666 corp: 11/131b lim: 13 exec/s: 25 rss: 76Mb 00:07:25.945 ###### Recommended dictionary. ###### 00:07:25.945 "\001\000\000\003" # Uses: 2 00:07:25.945 ###### End of recommended dictionary. 
###### 00:07:25.945 Done 50 runs in 2 second(s) 00:07:25.945 [2024-07-23 10:28:14.329999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:26.204 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:26.204 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.205 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:26.205 10:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:26.205 [2024-07-23 10:28:14.633528] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
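The "Recommended dictionary" footers printed after each run list byte sequences that unlocked new coverage; libFuzzer can be fed such tokens on later runs through a -dict= file. Dictionary files take quoted tokens with \xNN hex escapes, so the two octal-escaped entries seen so far would be transcribed roughly as below (the file name and token labels are assumptions, and whether this SPDK wrapper forwards extra libFuzzer flags such as -dict= is not shown in this log):

  # vfio.dict -- libFuzzer/AFL dictionary format; entries transcribed from the
  # "Recommended dictionary" footers above (octal \201 -> hex \x81, etc.).
  token_run4="\x81\x00\x00\x00"
  token_run5="\x01\x00\x00\x03"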
00:07:26.205 [2024-07-23 10:28:14.633606] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437116 ] 00:07:26.205 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.464 [2024-07-23 10:28:14.705494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.464 [2024-07-23 10:28:14.749686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.464 INFO: Running with entropic power schedule (0xFF, 100). 00:07:26.464 INFO: Seed: 4106009147 00:07:26.464 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:07:26.464 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:07:26.464 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:26.464 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.464 #2 INITED exec/s: 0 rss: 67Mb 00:07:26.464 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:26.464 This may also happen if the target rejected all inputs we tried so far 00:07:26.723 [2024-07-23 10:28:14.997273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:26.723 [2024-07-23 10:28:15.048817] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:26.723 [2024-07-23 10:28:15.048854] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:26.982 NEW_FUNC[1/658]: 0x4969a0 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:26.982 NEW_FUNC[2/658]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:26.982 #3 NEW cov: 10918 ft: 10842 corp: 2/10b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:27.241 [2024-07-23 10:28:15.524642] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:27.241 [2024-07-23 10:28:15.524688] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:27.241 #39 NEW cov: 10932 ft: 13864 corp: 3/19b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:27.241 [2024-07-23 10:28:15.701117] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:27.241 [2024-07-23 10:28:15.701153] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:27.499 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:27.499 #40 NEW cov: 10949 ft: 15363 corp: 4/28b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:27.499 [2024-07-23 10:28:15.887601] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:27.499 [2024-07-23 10:28:15.887634] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:27.499 #41 NEW cov: 10949 ft: 15671 corp: 5/37b lim: 9 exec/s: 41 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:27.757 [2024-07-23 10:28:16.062968] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:27.757 [2024-07-23 10:28:16.063000] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
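The "(( i++ ))" / "(( i < fuzz_num ))" / "start_llvm_fuzz N 1 0x1" lines that open each block come from the driver loop in ../common.sh, which walks the fuzzer types in order. Its shape, reconstructed from those xtrace lines (the real fuzz_num value is not visible in this excerpt):

  # Driver loop inferred from the ../common.sh trace above; the increment-then-test
  # ordering in the xtrace matches a C-style for loop.
  fuzz_num=7    # placeholder: this excerpt only shows types 4..6 running
  for (( i = 0; i < fuzz_num; i++ )); do
      start_llvm_fuzz "$i" 1 0x1    # e.g. "start_llvm_fuzz 6 1 0x1" above
  done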
00:07:27.757 #42 NEW cov: 10949 ft: 16017 corp: 6/46b lim: 9 exec/s: 42 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:27.757 [2024-07-23 10:28:16.237921] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:27.757 [2024-07-23 10:28:16.237952] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:28.015 #48 NEW cov: 10949 ft: 16354 corp: 7/55b lim: 9 exec/s: 48 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:28.015 [2024-07-23 10:28:16.409370] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:28.015 [2024-07-23 10:28:16.409402] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:28.015 #49 NEW cov: 10949 ft: 17253 corp: 8/64b lim: 9 exec/s: 49 rss: 75Mb L: 9/9 MS: 1 CrossOver- 00:07:28.274 [2024-07-23 10:28:16.579271] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:28.274 [2024-07-23 10:28:16.579301] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:28.274 #50 NEW cov: 10949 ft: 17414 corp: 9/73b lim: 9 exec/s: 50 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:28.274 [2024-07-23 10:28:16.752393] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:28.274 [2024-07-23 10:28:16.752424] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:28.532 #51 NEW cov: 10956 ft: 17860 corp: 10/82b lim: 9 exec/s: 51 rss: 75Mb L: 9/9 MS: 1 CMP- DE: "\010\000\000\000"- 00:07:28.532 [2024-07-23 10:28:16.939641] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:28.533 [2024-07-23 10:28:16.939676] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:28.791 #52 NEW cov: 10956 ft: 18061 corp: 11/91b lim: 9 exec/s: 26 rss: 75Mb L: 9/9 MS: 1 CrossOver- 00:07:28.791 #52 DONE cov: 10956 ft: 18061 corp: 11/91b lim: 9 exec/s: 26 rss: 75Mb 00:07:28.791 ###### Recommended dictionary. ###### 00:07:28.791 "\010\000\000\000" # Uses: 0 00:07:28.791 ###### End of recommended dictionary. 
###### 00:07:28.791 Done 52 runs in 2 second(s) 00:07:28.791 [2024-07-23 10:28:17.060985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:29.051 10:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:29.051 10:28:17 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.051 10:28:17 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.051 10:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:29.051 00:07:29.051 real 0m19.519s 00:07:29.051 user 0m27.189s 00:07:29.051 sys 0m1.945s 00:07:29.051 10:28:17 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.051 10:28:17 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:29.051 ************************************ 00:07:29.051 END TEST vfio_fuzz 00:07:29.051 ************************************ 00:07:29.051 10:28:17 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:07:29.051 00:07:29.051 real 1m25.497s 00:07:29.051 user 2m6.687s 00:07:29.051 sys 0m11.835s 00:07:29.051 10:28:17 llvm_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.051 10:28:17 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:29.051 ************************************ 00:07:29.051 END TEST llvm_fuzz 00:07:29.051 ************************************ 00:07:29.051 10:28:17 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:07:29.051 10:28:17 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:07:29.051 10:28:17 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:07:29.051 10:28:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.051 10:28:17 -- common/autotest_common.sh@10 -- # set +x 00:07:29.051 10:28:17 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:07:29.051 10:28:17 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:07:29.051 10:28:17 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:07:29.051 10:28:17 -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 INFO: APP EXITING 00:07:33.241 INFO: killing all VMs 00:07:33.241 INFO: killing vhost app 00:07:33.241 WARN: no vhost pid file found 00:07:33.241 INFO: EXIT DONE 00:07:36.534 Waiting for block devices as requested 00:07:36.534 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:07:36.534 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:07:36.534 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:36.534 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:36.534 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:36.793 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:36.793 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:36.793 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:37.051 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:37.051 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:37.051 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:07:37.310 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:37.310 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:37.310 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:37.569 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:37.569 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:37.569 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:37.828 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:37.828 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:42.022 Cleaning 00:07:42.022 Removing: /dev/shm/spdk_tgt_trace.pid3410160 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3409659 00:07:42.022 
Removing: /var/run/dpdk/spdk_pid3410160 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3410613 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3411334 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3411466 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3412251 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3412262 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3412582 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3412811 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3413049 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3413295 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3413519 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3413662 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3413803 00:07:42.022 Removing: /var/run/dpdk/spdk_pid3414026 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3414764 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3417003 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3417237 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3417432 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3417582 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3418000 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3418013 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3418417 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3418558 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3418805 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3418810 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3419023 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3419033 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3419494 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3419693 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3419878 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3419966 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3420180 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3420205 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3420419 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3420632 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3420842 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3421045 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3421243 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3421448 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3421648 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3421847 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3422054 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3422257 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3422454 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3422657 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3422858 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3423062 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3423266 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3423464 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3423672 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3423872 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3424076 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3424287 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3424488 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3424592 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3424805 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3425318 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3425630 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3425938 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3426315 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3426796 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3427445 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3427941 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3428313 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3428681 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3429056 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3429424 00:07:42.023 
Removing: /var/run/dpdk/spdk_pid3429797 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3430170 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3430472 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3430791 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3431117 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3431484 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3431856 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3432228 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3432598 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3432969 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3433342 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3433708 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3434079 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3434397 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3434892 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3435261 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3435612 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3435970 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3436373 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3436749 00:07:42.023 Removing: /var/run/dpdk/spdk_pid3437116 00:07:42.023 Clean 00:07:42.023 10:28:30 -- common/autotest_common.sh@1447 -- # return 0 00:07:42.023 10:28:30 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:07:42.023 10:28:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.023 10:28:30 -- common/autotest_common.sh@10 -- # set +x 00:07:42.023 10:28:30 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:07:42.023 10:28:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.023 10:28:30 -- common/autotest_common.sh@10 -- # set +x 00:07:42.023 10:28:30 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:42.023 10:28:30 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:07:42.023 10:28:30 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:07:42.023 10:28:30 -- spdk/autotest.sh@391 -- # hash lcov 00:07:42.023 10:28:30 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:07:42.023 10:28:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:42.023 10:28:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:42.023 10:28:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.023 10:28:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.023 10:28:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.023 10:28:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.023 10:28:30 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.023 10:28:30 -- paths/export.sh@5 -- $ export PATH 00:07:42.023 10:28:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.023 10:28:30 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:07:42.023 10:28:30 -- common/autobuild_common.sh@440 -- $ date +%s 00:07:42.023 10:28:30 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721723310.XXXXXX 00:07:42.023 10:28:30 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721723310.cQn17R 00:07:42.023 10:28:30 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:07:42.023 10:28:30 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:07:42.023 10:28:30 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:42.023 10:28:30 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk' 00:07:42.023 10:28:30 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:42.023 10:28:30 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:42.023 10:28:30 -- common/autobuild_common.sh@456 -- $ get_config_params 00:07:42.023 10:28:30 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:07:42.023 10:28:30 -- common/autotest_common.sh@10 -- $ set +x 00:07:42.023 10:28:30 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user' 00:07:42.023 10:28:30 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:07:42.023 10:28:30 -- pm/common@17 -- $ local monitor 00:07:42.023 10:28:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:42.023 10:28:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:42.023 10:28:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:42.023 10:28:30 -- pm/common@21 -- $ date +%s 00:07:42.023 10:28:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:42.023 10:28:30 -- pm/common@21 -- $ date +%s 00:07:42.023 10:28:30 -- pm/common@25 -- $ sleep 1 00:07:42.282 10:28:30 -- pm/common@21 -- $ date +%s 00:07:42.282 10:28:30 -- pm/common@21 -- $ date +%s 00:07:42.282 10:28:30 -- pm/common@21 -- $ 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721723310 00:07:42.282 10:28:30 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721723310 00:07:42.282 10:28:30 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721723310 00:07:42.282 10:28:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721723310 00:07:42.282 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721723310_collect-vmstat.pm.log 00:07:42.282 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721723310_collect-cpu-load.pm.log 00:07:42.282 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721723310_collect-cpu-temp.pm.log 00:07:42.282 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721723310_collect-bmc-pm.bmc.pm.log 00:07:43.218 10:28:31 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:07:43.218 10:28:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:07:43.218 10:28:31 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:43.218 10:28:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:07:43.218 10:28:31 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:07:43.218 10:28:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:07:43.218 10:28:31 -- spdk/autopackage.sh@19 -- $ timing_finish 00:07:43.218 10:28:31 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:07:43.218 10:28:31 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:07:43.218 10:28:31 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:43.218 10:28:31 -- spdk/autopackage.sh@20 -- $ exit 0 00:07:43.218 10:28:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:07:43.218 10:28:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:43.218 10:28:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:43.218 10:28:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:43.218 10:28:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:43.218 10:28:31 -- pm/common@44 -- $ pid=3442779 00:07:43.218 10:28:31 -- pm/common@50 -- $ kill -TERM 3442779 00:07:43.218 10:28:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:43.218 10:28:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:43.218 10:28:31 -- pm/common@44 -- $ pid=3442781 00:07:43.218 10:28:31 -- pm/common@50 -- $ kill -TERM 3442781 00:07:43.218 10:28:31 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:07:43.218 10:28:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:43.218 10:28:31 -- pm/common@44 -- $ pid=3442783 00:07:43.218 10:28:31 -- pm/common@50 -- $ kill -TERM 3442783 00:07:43.218 10:28:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:43.218 10:28:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:43.218 10:28:31 -- pm/common@44 -- $ pid=3442818 00:07:43.218 10:28:31 -- pm/common@50 -- $ sudo -E kill -TERM 3442818 00:07:43.218 + [[ -n 3298063 ]] 00:07:43.218 + sudo kill 3298063 00:07:43.228 [Pipeline] } 00:07:43.247 [Pipeline] // stage 00:07:43.253 [Pipeline] } 00:07:43.272 [Pipeline] // timeout 00:07:43.277 [Pipeline] } 00:07:43.296 [Pipeline] // catchError 00:07:43.301 [Pipeline] } 00:07:43.317 [Pipeline] // wrap 00:07:43.323 [Pipeline] } 00:07:43.341 [Pipeline] // catchError 00:07:43.352 [Pipeline] stage 00:07:43.354 [Pipeline] { (Epilogue) 00:07:43.366 [Pipeline] catchError 00:07:43.368 [Pipeline] { 00:07:43.381 [Pipeline] echo 00:07:43.382 Cleanup processes 00:07:43.387 [Pipeline] sh 00:07:43.669 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:43.670 3442958 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:07:43.670 3443608 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:43.684 [Pipeline] sh 00:07:43.967 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:43.967 ++ grep -v 'sudo pgrep' 00:07:43.967 ++ awk '{print $1}' 00:07:43.967 + sudo kill -9 3442958 00:07:43.979 [Pipeline] sh 00:07:44.263 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:07:45.211 [Pipeline] sh 00:07:45.494 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:07:45.494 Artifacts sizes are good 00:07:45.511 [Pipeline] archiveArtifacts 00:07:45.548 Archiving artifacts 00:07:45.613 [Pipeline] sh 00:07:45.952 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:07:45.969 [Pipeline] cleanWs 00:07:45.980 [WS-CLEANUP] Deleting project workspace... 00:07:45.980 [WS-CLEANUP] Deferred wipeout is used... 00:07:45.987 [WS-CLEANUP] done 00:07:45.989 [Pipeline] } 00:07:46.010 [Pipeline] // catchError 00:07:46.022 [Pipeline] sh 00:07:46.304 + logger -p user.info -t JENKINS-CI 00:07:46.313 [Pipeline] } 00:07:46.331 [Pipeline] // stage 00:07:46.337 [Pipeline] } 00:07:46.355 [Pipeline] // node 00:07:46.361 [Pipeline] End of Pipeline 00:07:46.401 Finished: SUCCESS