00:00:00.000 Started by upstream project "autotest-per-patch" build number 127157
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.025 The recommended git tool is: git
00:00:00.025 using credential 00000000-0000-0000-0000-000000000002
00:00:00.027 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.042 Fetching changes from the remote Git repository
00:00:00.043 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.062 Using shallow fetch with depth 1
00:00:00.062 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.062 > git --version # timeout=10
00:00:00.095 > git --version # 'git version 2.39.2'
00:00:00.095 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.139 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.139 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.334 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.347 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.358 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD)
00:00:03.359 > git config core.sparsecheckout # timeout=10
00:00:03.369 > git read-tree -mu HEAD # timeout=10
00:00:03.386 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5
00:00:03.405 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs"
00:00:03.405 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10
00:00:03.520 [Pipeline] Start of Pipeline
00:00:03.539 [Pipeline] library
00:00:03.541 Loading library shm_lib@master
00:00:03.541 Library shm_lib@master is cached. Copying from home.
00:00:03.557 [Pipeline] node
00:00:03.575 Running on WFP29 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.577 [Pipeline] {
00:00:03.588 [Pipeline] catchError
00:00:03.589 [Pipeline] {
00:00:03.602 [Pipeline] wrap
00:00:03.610 [Pipeline] {
00:00:03.616 [Pipeline] stage
00:00:03.617 [Pipeline] { (Prologue)
00:00:03.800 [Pipeline] sh
00:00:04.083 + logger -p user.info -t JENKINS-CI
00:00:04.102 [Pipeline] echo
00:00:04.104 Node: WFP29
00:00:04.111 [Pipeline] sh
00:00:04.408 [Pipeline] setCustomBuildProperty
00:00:04.420 [Pipeline] echo
00:00:04.421 Cleanup processes
00:00:04.427 [Pipeline] sh
00:00:04.709 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.709 788328 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.720 [Pipeline] sh
00:00:04.998 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.998 ++ grep -v 'sudo pgrep'
00:00:04.998 ++ awk '{print $1}'
00:00:04.998 + sudo kill -9
00:00:04.998 + true
00:00:05.011 [Pipeline] cleanWs
00:00:05.021 [WS-CLEANUP] Deleting project workspace...
00:00:05.021 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.027 [WS-CLEANUP] done
00:00:05.031 [Pipeline] setCustomBuildProperty
00:00:05.047 [Pipeline] sh
00:00:05.327 + sudo git config --global --replace-all safe.directory '*'
00:00:05.408 [Pipeline] httpRequest
00:00:05.463 [Pipeline] echo
00:00:05.464 Sorcerer 10.211.164.101 is alive
00:00:05.472 [Pipeline] httpRequest
00:00:05.477 HttpMethod: GET
00:00:05.478 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:05.479 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:05.482 Response Code: HTTP/1.1 200 OK
00:00:05.483 Success: Status code 200 is in the accepted range: 200,404
00:00:05.483 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:06.449 [Pipeline] sh
00:00:06.732 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:06.743 [Pipeline] httpRequest
00:00:06.761 [Pipeline] echo
00:00:06.762 Sorcerer 10.211.164.101 is alive
00:00:06.768 [Pipeline] httpRequest
00:00:06.772 HttpMethod: GET
00:00:06.773 URL: http://10.211.164.101/packages/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:00:06.774 Sending request to url: http://10.211.164.101/packages/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:00:06.794 Response Code: HTTP/1.1 200 OK
00:00:06.794 Success: Status code 200 is in the accepted range: 200,404
00:00:06.795 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:01:05.980 [Pipeline] sh
00:01:06.264 + tar --no-same-owner -xf spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:01:08.809 [Pipeline] sh
00:01:09.092 + git -C spdk log --oneline -n5
00:01:09.092 86fd5638b autotest: reduce RAID tests runs
00:01:09.092 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
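Note on the "Cleanup processes" step in the prologue above: it reduces to one pipeline that lists anything still running out of the job's SPDK tree, drops the pgrep invocation itself, and force-kills the rest. A minimal bash sketch, reconstructed from the xtrace lines above and not taken from the job's own scripts (the variable names ws and pids are illustrative):

    #!/usr/bin/env bash
    # Workspace whose leftover SPDK processes should be killed.
    ws=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # pgrep -af matches on the full command line, so any process started
    # from the workspace shows up; grep -v drops the pgrep pipeline itself.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # As in the log, a trailing guard keeps the step green when nothing is
    # left to kill (kill -9 with no PIDs exits non-zero).
    sudo kill -9 $pids || true

In the run above the only match was the pgrep process itself, so the kill received no PIDs and the guard absorbed the failure.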
00:01:09.092 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:09.092 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:09.092 d005e023b raid: fix empty slot not updated in sb after resize
00:01:09.104 [Pipeline] }
00:01:09.120 [Pipeline] // stage
00:01:09.130 [Pipeline] stage
00:01:09.132 [Pipeline] { (Prepare)
00:01:09.150 [Pipeline] writeFile
00:01:09.168 [Pipeline] sh
00:01:09.451 + logger -p user.info -t JENKINS-CI
00:01:09.464 [Pipeline] sh
00:01:09.749 + logger -p user.info -t JENKINS-CI
00:01:09.761 [Pipeline] sh
00:01:10.046 + cat autorun-spdk.conf
00:01:10.046 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.046 SPDK_TEST_FUZZER_SHORT=1
00:01:10.046 SPDK_TEST_FUZZER=1
00:01:10.046 SPDK_RUN_UBSAN=1
00:01:10.054 RUN_NIGHTLY=0
00:01:10.059 [Pipeline] readFile
00:01:10.086 [Pipeline] withEnv
00:01:10.089 [Pipeline] {
00:01:10.103 [Pipeline] sh
00:01:10.388 + set -ex
00:01:10.388 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:01:10.388 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:10.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.388 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:10.388 ++ SPDK_TEST_FUZZER=1
00:01:10.388 ++ SPDK_RUN_UBSAN=1
00:01:10.388 ++ RUN_NIGHTLY=0
00:01:10.388 + case $SPDK_TEST_NVMF_NICS in
00:01:10.388 + DRIVERS=
00:01:10.388 + [[ -n '' ]]
00:01:10.388 + exit 0
00:01:10.398 [Pipeline] }
00:01:10.418 [Pipeline] // withEnv
00:01:10.424 [Pipeline] }
00:01:10.443 [Pipeline] // stage
00:01:10.455 [Pipeline] catchError
00:01:10.458 [Pipeline] {
00:01:10.475 [Pipeline] timeout
00:01:10.476 Timeout set to expire in 30 min
00:01:10.478 [Pipeline] {
00:01:10.496 [Pipeline] stage
00:01:10.498 [Pipeline] { (Tests)
00:01:10.512 [Pipeline] sh
00:01:10.798 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:10.798 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:10.798 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:01:10.798 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:01:10.798 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:10.798 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:10.798 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:01:10.798 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:10.798 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:10.798 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:10.798 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:01:10.798 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:10.798 + source /etc/os-release
00:01:10.798 ++ NAME='Fedora Linux'
00:01:10.798 ++ VERSION='38 (Cloud Edition)'
00:01:10.798 ++ ID=fedora
00:01:10.798 ++ VERSION_ID=38
00:01:10.798 ++ VERSION_CODENAME=
00:01:10.799 ++ PLATFORM_ID=platform:f38
00:01:10.799 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:10.799 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:10.799 ++ LOGO=fedora-logo-icon
00:01:10.799 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:10.799 ++ HOME_URL=https://fedoraproject.org/
00:01:10.799 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:10.799 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:10.799 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:10.799 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:10.799 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:10.799 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:10.799 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:10.799 ++ SUPPORT_END=2024-05-14
00:01:10.799 ++ VARIANT='Cloud Edition'
00:01:10.799 ++ VARIANT_ID=cloud
00:01:10.799 + uname -a
00:01:10.799 Linux spdk-wfp-29 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:10.799 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:01:14.092 Hugepages
00:01:14.092 node hugesize free / total
00:01:14.092 node0 1048576kB 0 / 0
00:01:14.092 node0 2048kB 0 / 0
00:01:14.092 node1 1048576kB 0 / 0
00:01:14.092 node1 2048kB 0 / 0
00:01:14.092
00:01:14.092 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:14.092 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:14.092 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:14.092 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:14.092 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:14.092 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:14.092 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:14.092 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:14.092 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:14.092 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:14.092 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:14.092 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:14.092 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:14.092 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:14.092 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:14.092 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:14.092 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:14.092 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:14.092 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1
00:01:14.352 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1
00:01:14.352 + rm -f /tmp/spdk-ld-path
00:01:14.352 + source autorun-spdk.conf
00:01:14.352 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.352 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:14.352 ++ SPDK_TEST_FUZZER=1
00:01:14.352 ++ SPDK_RUN_UBSAN=1
00:01:14.352 ++ RUN_NIGHTLY=0
00:01:14.352 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:14.352 + [[ -n '' ]]
00:01:14.352 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:14.352 + for M in /var/spdk/build-*-manifest.txt
00:01:14.352 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:14.352 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:14.352 + for M in /var/spdk/build-*-manifest.txt
00:01:14.352 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:14.352 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:14.352 ++ uname
00:01:14.352 + [[ Linux == \L\i\n\u\x ]]
00:01:14.352 + sudo dmesg -T
00:01:14.352 + sudo dmesg --clear
00:01:14.352 + dmesg_pid=789300
00:01:14.352 + [[ Fedora Linux == FreeBSD ]]
00:01:14.352 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:14.352 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:14.352 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:14.352 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:14.352 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:14.352 + [[ -x /usr/src/fio-static/fio ]]
00:01:14.352 + export FIO_BIN=/usr/src/fio-static/fio
00:01:14.352 + FIO_BIN=/usr/src/fio-static/fio
00:01:14.352 + sudo dmesg -Tw
00:01:14.352 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:14.352 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:14.352 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:14.352 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:14.352 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:14.352 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:14.352 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:14.352 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:14.352 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:14.352 Test configuration:
00:01:14.352 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.352 SPDK_TEST_FUZZER_SHORT=1
00:01:14.352 SPDK_TEST_FUZZER=1
00:01:14.352 SPDK_RUN_UBSAN=1
00:01:14.612 RUN_NIGHTLY=0
11:47:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
11:47:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
11:47:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:47:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:47:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:47:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:47:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:47:51 -- paths/export.sh@5 -- $ export PATH
11:47:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:47:51 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
11:47:51 -- common/autobuild_common.sh@447 -- $ date +%s
11:47:51 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721900871.XXXXXX
11:47:51 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721900871.O79uOB
11:47:51 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
11:47:51 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
11:47:51 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
11:47:51 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
11:47:51 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
11:47:51 -- common/autobuild_common.sh@463 -- $ get_config_params
11:47:51 -- common/autotest_common.sh@398 -- $ xtrace_disable
11:47:51 -- common/autotest_common.sh@10 -- $ set +x
11:47:51 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
11:47:51 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
11:47:51 -- pm/common@17 -- $ local monitor
11:47:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:47:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:47:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:47:51 -- pm/common@21 -- $ date +%s
11:47:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:47:51 -- pm/common@21 -- $ date +%s
11:47:51 -- pm/common@25 -- $ sleep 1
11:47:51 -- pm/common@21 -- $ date +%s
11:47:51 -- pm/common@21 -- $ date +%s
11:47:51 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900871
11:47:51 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900871
11:47:51 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900871
11:47:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900871
00:01:14.612 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900871_collect-vmstat.pm.log
00:01:14.612 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900871_collect-cpu-load.pm.log
00:01:14.612 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900871_collect-cpu-temp.pm.log
00:01:14.612 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900871_collect-bmc-pm.bmc.pm.log
00:01:15.550 11:47:52 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
11:47:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
11:47:52 -- spdk/autobuild.sh@12 -- $ umask 022
11:47:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
11:47:52 -- spdk/autobuild.sh@16 -- $ date -u
00:01:15.550 Thu Jul 25 09:47:52 AM UTC 2024
11:47:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:15.550 v24.09-pre-322-g86fd5638b
11:47:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
11:47:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
11:47:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
11:47:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
11:47:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable
11:47:52 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.550 ************************************
00:01:15.550 START TEST ubsan
00:01:15.550 ************************************
11:47:52 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:15.550 using ubsan
00:01:15.550
00:01:15.550 real 0m0.001s
00:01:15.550 user 0m0.000s
00:01:15.550 sys 0m0.001s
11:47:52 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
11:47:52 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:15.550 ************************************
00:01:15.550 END TEST ubsan
00:01:15.550 ************************************
00:01:15.810 11:47:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
11:47:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
11:47:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
11:47:52 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
11:47:52 -- spdk/autobuild.sh@52 -- $ llvm_precompile
11:47:52 -- common/autobuild_common.sh@435 -- $ run_test autobuild_llvm_precompile _llvm_precompile
11:47:52 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
11:47:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable
11:47:52 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.810 ************************************
00:01:15.810 START TEST autobuild_llvm_precompile
00:01:15.810 ************************************
11:47:52 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile
11:47:52 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:01:16.069 11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38)
00:01:16.069 Target: x86_64-redhat-linux-gnu
00:01:16.069 Thread model: posix
00:01:16.069 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]]
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a'
11:47:53 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:16.638 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:16.638 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:17.575 Using 'verbs' RDMA provider
00:01:33.431 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:48.326 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:48.326 Creating mk/config.mk...done.
00:01:48.326 Creating mk/cc.flags.mk...done.
00:01:48.326 Type 'make' to build.
00:01:48.326
00:01:48.326 real 0m31.754s
00:01:48.326 user 0m13.172s
00:01:48.326 sys 0m18.005s
11:48:24 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable
11:48:24 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:48.326 ************************************
00:01:48.326 END TEST autobuild_llvm_precompile
00:01:48.326 ************************************
00:01:48.326 11:48:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
11:48:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
11:48:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
11:48:24 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
11:48:24 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:48.326 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:48.326 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:48.326 Using 'verbs' RDMA provider
00:02:01.478 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:13.687 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:13.946 Creating mk/config.mk...done.
00:02:13.946 Creating mk/cc.flags.mk...done.
00:02:13.946 Type 'make' to build.
00:02:13.946 11:48:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j72
11:48:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
11:48:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable
11:48:51 -- common/autotest_common.sh@10 -- $ set +x
00:02:13.946 ************************************
00:02:13.946 START TEST make
00:02:13.946 ************************************
11:48:51 make -- common/autotest_common.sh@1125 -- $ make -j72
00:02:14.204 make[1]: Nothing to be done for 'all'.
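Note on the build stage above: the log records the exact configure invocation and compilers, so the same build can be replayed outside Jenkins. A minimal bash sketch assembled from the xtrace lines above (the flags are copied from the log; the clang 16 fuzzer-library path matches this Fedora 38 host and will differ on other systems, and the -j72 job count should be adjusted to the local core count):

    # Same toolchain the job selected in autobuild_common.sh.
    export CC=clang-16
    export CXX=clang++-16
    # Same configure flags recorded at spdk/autobuild.sh@64, including the
    # libFuzzer archive handed to --with-fuzzer for the short fuzzer tests.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user \
        --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
    # The job builds with 72 parallel jobs (run_test make make -j72).
    make -j72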
00:02:16.112 The Meson build system
00:02:16.112 Version: 1.3.1
00:02:16.112 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:02:16.112 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:16.112 Build type: native build
00:02:16.112 Project name: libvfio-user
00:02:16.112 Project version: 0.0.1
00:02:16.112 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:02:16.112 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:02:16.112 Host machine cpu family: x86_64
00:02:16.112 Host machine cpu: x86_64
00:02:16.112 Run-time dependency threads found: YES
00:02:16.112 Library dl found: YES
00:02:16.112 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:16.112 Run-time dependency json-c found: YES 0.17
00:02:16.112 Run-time dependency cmocka found: YES 1.1.7
00:02:16.112 Program pytest-3 found: NO
00:02:16.112 Program flake8 found: NO
00:02:16.112 Program misspell-fixer found: NO
00:02:16.112 Program restructuredtext-lint found: NO
00:02:16.112 Program valgrind found: YES (/usr/bin/valgrind)
00:02:16.112 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:16.112 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:16.112 Compiler for C supports arguments -Wwrite-strings: YES
00:02:16.112 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:16.112 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:16.112 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:16.112 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:16.112 Build targets in project: 8
00:02:16.112 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:16.112 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:16.112
00:02:16.112 libvfio-user 0.0.1
00:02:16.112
00:02:16.112 User defined options
00:02:16.112 buildtype : debug
00:02:16.112 default_library: static
00:02:16.112 libdir : /usr/local/lib
00:02:16.112
00:02:16.112 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:16.370 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:16.370 [1/36] Compiling C object samples/null.p/null.c.o
00:02:16.370 [2/36] Compiling C object samples/lspci.p/lspci.c.o
00:02:16.370 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:16.370 [4/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:16.370 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:02:16.370 [6/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:16.370 [7/36] Compiling C object test/unit_tests.p/mocks.c.o
00:02:16.370 [8/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:16.370 [9/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:02:16.370 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:16.370 [11/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:16.370 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:02:16.370 [13/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:02:16.370 [14/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:02:16.370 [15/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:16.370 [16/36] Compiling C object samples/server.p/server.c.o
00:02:16.370 [17/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:16.370 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:02:16.370 [19/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:02:16.370 [20/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:16.370 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:16.370 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:16.370 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:16.370 [24/36] Compiling C object samples/client.p/client.c.o
00:02:16.370 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:16.370 [26/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:02:16.370 [27/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:16.628 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:16.628 [29/36] Linking target samples/client
00:02:16.628 [30/36] Linking static target lib/libvfio-user.a
00:02:16.628 [31/36] Linking target test/unit_tests
00:02:16.628 [32/36] Linking target samples/gpio-pci-idio-16
00:02:16.628 [33/36] Linking target samples/lspci
00:02:16.628 [34/36] Linking target samples/server
00:02:16.628 [35/36] Linking target samples/null
00:02:16.628 [36/36] Linking target samples/shadow_ioeventfd_server
00:02:16.628 INFO: autodetecting backend as ninja
00:02:16.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:16.628 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:16.887 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:16.887 ninja: no work to do.
00:02:23.464 The Meson build system
00:02:23.464 Version: 1.3.1
00:02:23.464 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:02:23.464 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:02:23.464 Build type: native build
00:02:23.464 Program cat found: YES (/usr/bin/cat)
00:02:23.464 Project name: DPDK
00:02:23.464 Project version: 24.03.0
00:02:23.464 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:02:23.464 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:02:23.464 Host machine cpu family: x86_64
00:02:23.464 Host machine cpu: x86_64
00:02:23.464 Message: ## Building in Developer Mode ##
00:02:23.464 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:23.464 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:23.464 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:23.464 Program python3 found: YES (/usr/bin/python3)
00:02:23.464 Program cat found: YES (/usr/bin/cat)
00:02:23.464 Compiler for C supports arguments -march=native: YES
00:02:23.464 Checking for size of "void *" : 8
00:02:23.464 Checking for size of "void *" : 8 (cached)
00:02:23.464 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:23.464 Library m found: YES
00:02:23.464 Library numa found: YES
00:02:23.464 Has header "numaif.h" : YES
00:02:23.464 Library fdt found: NO
00:02:23.464 Library execinfo found: NO
00:02:23.464 Has header "execinfo.h" : YES
00:02:23.464 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:23.464 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:23.464 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:23.464 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:23.464 Run-time dependency openssl found: YES 3.0.9
00:02:23.464 Run-time dependency libpcap found: YES 1.10.4
00:02:23.464 Has header "pcap.h" with dependency libpcap: YES
00:02:23.464 Compiler for C supports arguments -Wcast-qual: YES
00:02:23.464 Compiler for C supports arguments -Wdeprecated: YES
00:02:23.464 Compiler for C supports arguments -Wformat: YES
00:02:23.464 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:23.464 Compiler for C supports arguments -Wformat-security: YES
00:02:23.464 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:23.464 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:23.464 Compiler for C supports arguments -Wnested-externs: YES
00:02:23.464 Compiler for C supports arguments -Wold-style-definition: YES
00:02:23.464 Compiler for C supports arguments -Wpointer-arith: YES
00:02:23.464 Compiler for C supports arguments -Wsign-compare: YES
00:02:23.464 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:23.464 Compiler for C supports arguments -Wundef: YES
00:02:23.464 Compiler for C supports arguments -Wwrite-strings: YES
00:02:23.464 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:23.464 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:02:23.464 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:23.464 Program objdump found: YES (/usr/bin/objdump)
00:02:23.464 Compiler for C supports arguments -mavx512f: YES
00:02:23.464 Checking if "AVX512 checking" compiles: YES
00:02:23.464 Fetching value of define "__SSE4_2__" : 1
00:02:23.464 Fetching value of define "__AES__" : 1
00:02:23.464 Fetching value of define "__AVX__" : 1
00:02:23.464 Fetching value of define "__AVX2__" : 1
00:02:23.464 Fetching value of define "__AVX512BW__" : 1
00:02:23.464 Fetching value of define "__AVX512CD__" : 1
00:02:23.464 Fetching value of define "__AVX512DQ__" : 1
00:02:23.465 Fetching value of define "__AVX512F__" : 1
00:02:23.465 Fetching value of define "__AVX512VL__" : 1
00:02:23.465 Fetching value of define "__PCLMUL__" : 1
00:02:23.465 Fetching value of define "__RDRND__" : 1
00:02:23.465 Fetching value of define "__RDSEED__" : 1
00:02:23.465 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:23.465 Fetching value of define "__znver1__" : (undefined)
00:02:23.465 Fetching value of define "__znver2__" : (undefined)
00:02:23.465 Fetching value of define "__znver3__" : (undefined)
00:02:23.465 Fetching value of define "__znver4__" : (undefined)
00:02:23.465 Compiler for C supports arguments -Wno-format-truncation: NO
00:02:23.465 Message: lib/log: Defining dependency "log"
00:02:23.465 Message: lib/kvargs: Defining dependency "kvargs"
00:02:23.465 Message: lib/telemetry: Defining dependency "telemetry"
00:02:23.465 Checking for function "getentropy" : NO
00:02:23.465 Message: lib/eal: Defining dependency "eal"
00:02:23.465 Message: lib/ring: Defining dependency "ring"
00:02:23.465 Message: lib/rcu: Defining dependency "rcu"
00:02:23.465 Message: lib/mempool: Defining dependency "mempool"
00:02:23.465 Message: lib/mbuf: Defining dependency "mbuf"
00:02:23.465 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:23.465 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:23.465 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:23.465 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:23.465 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:23.465 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:23.465 Compiler for C supports arguments -mpclmul: YES
00:02:23.465 Compiler for C supports arguments -maes: YES
00:02:23.465 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:23.465 Compiler for C supports arguments -mavx512bw: YES
00:02:23.465 Compiler for C supports arguments -mavx512dq: YES
00:02:23.465 Compiler for C supports arguments -mavx512vl: YES
00:02:23.465 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:23.465 Compiler for C supports arguments -mavx2: YES
00:02:23.465 Compiler for C supports arguments -mavx: YES
00:02:23.465 Message: lib/net: Defining dependency "net"
00:02:23.465 Message: lib/meter: Defining dependency "meter"
00:02:23.465 Message: lib/ethdev: Defining dependency "ethdev"
00:02:23.465 Message: lib/pci: Defining dependency "pci"
00:02:23.465 Message: lib/cmdline: Defining dependency "cmdline"
00:02:23.465 Message: lib/hash: Defining dependency "hash"
00:02:23.465 Message: lib/timer: Defining dependency "timer"
00:02:23.465 Message: lib/compressdev: Defining dependency "compressdev"
00:02:23.465 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:23.465 Message: lib/dmadev: Defining dependency "dmadev"
00:02:23.465 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:23.465 Message: lib/power: Defining dependency "power"
00:02:23.465 Message: lib/reorder: Defining dependency "reorder"
00:02:23.465 Message: lib/security: Defining dependency "security"
00:02:23.465 Has header "linux/userfaultfd.h" : YES
00:02:23.465 Has header "linux/vduse.h" : YES
00:02:23.465 Message: lib/vhost: Defining dependency "vhost"
00:02:23.465 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:02:23.465 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:23.465 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:23.465 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:23.465 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:23.465 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:23.465 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:23.465 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:23.465 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:23.465 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:23.465 Program doxygen found: YES (/usr/bin/doxygen)
00:02:23.465 Configuring doxy-api-html.conf using configuration
00:02:23.465 Configuring doxy-api-man.conf using configuration
00:02:23.465 Program mandb found: YES (/usr/bin/mandb)
00:02:23.465 Program sphinx-build found: NO
00:02:23.465 Configuring rte_build_config.h using configuration
00:02:23.465 Message:
00:02:23.465 =================
00:02:23.465 Applications Enabled
00:02:23.465 =================
00:02:23.465
00:02:23.465 apps:
00:02:23.465
00:02:23.465
00:02:23.465 Message:
00:02:23.465 =================
00:02:23.465 Libraries Enabled
00:02:23.465 =================
00:02:23.465
00:02:23.465 libs:
00:02:23.465 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:23.465 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:23.465 cryptodev, dmadev, power, reorder, security, vhost,
00:02:23.465
00:02:23.465 Message:
00:02:23.465 ===============
00:02:23.465 Drivers Enabled
00:02:23.465 ===============
00:02:23.465
00:02:23.465 common:
00:02:23.465
00:02:23.465 bus:
00:02:23.465 pci, vdev,
00:02:23.465 mempool:
00:02:23.465 ring,
00:02:23.465 dma:
00:02:23.465
00:02:23.465 net:
00:02:23.465
00:02:23.465 crypto:
00:02:23.465
00:02:23.465 compress:
00:02:23.465
00:02:23.465 vdpa:
00:02:23.465
00:02:23.465
00:02:23.465 Message:
00:02:23.465 =================
00:02:23.465 Content Skipped
00:02:23.465 =================
00:02:23.465
00:02:23.465 apps:
00:02:23.465 dumpcap: explicitly disabled via build config
00:02:23.465 graph: explicitly disabled via build config
00:02:23.465 pdump: explicitly disabled via build config
00:02:23.465 proc-info: explicitly disabled via build config
00:02:23.465 test-acl: explicitly disabled via build config
00:02:23.465 test-bbdev: explicitly disabled via build config
00:02:23.465 test-cmdline: explicitly disabled via build config
00:02:23.465 test-compress-perf: explicitly disabled via build config
00:02:23.465 test-crypto-perf: explicitly disabled via build config
00:02:23.465 test-dma-perf: explicitly disabled via build config
00:02:23.465 test-eventdev: explicitly disabled via build config
00:02:23.465 test-fib: explicitly disabled via build config
00:02:23.465 test-flow-perf: explicitly disabled via build config
00:02:23.465 test-gpudev: explicitly disabled via build config
00:02:23.465 test-mldev: explicitly disabled via build config
00:02:23.465 test-pipeline: explicitly disabled via build config
00:02:23.465 test-pmd: explicitly disabled via build config
00:02:23.465 test-regex: explicitly disabled via build config
00:02:23.465 test-sad: explicitly disabled via build config
00:02:23.465 test-security-perf: explicitly disabled via build config
00:02:23.465
00:02:23.465 libs:
00:02:23.465 argparse: explicitly disabled via build config
00:02:23.465 metrics: explicitly disabled via build config
00:02:23.465 acl: explicitly disabled via build config
00:02:23.465 bbdev: explicitly disabled via build config
00:02:23.465 bitratestats: explicitly disabled via build config
00:02:23.465 bpf: explicitly disabled via build config
00:02:23.465 cfgfile: explicitly disabled via build config
00:02:23.465 distributor: explicitly disabled via build config
00:02:23.465 efd: explicitly disabled via build config
00:02:23.465 eventdev: explicitly disabled via build config
00:02:23.465 dispatcher: explicitly disabled via build config
00:02:23.465 gpudev: explicitly disabled via build config
00:02:23.465 gro: explicitly disabled via build config
00:02:23.465 gso: explicitly disabled via build config
00:02:23.465 ip_frag: explicitly disabled via build config
00:02:23.465 jobstats: explicitly disabled via build config
00:02:23.465 latencystats: explicitly disabled via build config
00:02:23.465 lpm: explicitly disabled via build config
00:02:23.465 member: explicitly disabled via build config
00:02:23.465 pcapng: explicitly disabled via build config
00:02:23.465 rawdev: explicitly disabled via build config
00:02:23.465 regexdev: explicitly disabled via build config
00:02:23.465 mldev: explicitly disabled via build config
00:02:23.465 rib: explicitly disabled via build config
00:02:23.465 sched: explicitly disabled via build config
00:02:23.465 stack: explicitly disabled via build config
00:02:23.465 ipsec: explicitly disabled via build config
00:02:23.465 pdcp: explicitly disabled via build config
00:02:23.465 fib: explicitly disabled via build config
00:02:23.465 port: explicitly disabled via build config
00:02:23.465 pdump: explicitly disabled via build config
00:02:23.465 table: explicitly disabled via build config
00:02:23.465 pipeline: explicitly disabled via build config
00:02:23.465 graph: explicitly disabled via build config
00:02:23.465 node: explicitly disabled via build config
00:02:23.465
00:02:23.465 drivers:
00:02:23.465 common/cpt: not in enabled drivers build config
00:02:23.465 common/dpaax: not in enabled drivers build config
00:02:23.465 common/iavf: not in enabled drivers build config
00:02:23.465 common/idpf: not in enabled drivers build config
00:02:23.465 common/ionic: not in enabled drivers build config
00:02:23.465 common/mvep: not in enabled drivers build config
00:02:23.465 common/octeontx: not in enabled drivers build config
00:02:23.465 bus/auxiliary: not in enabled drivers build config
00:02:23.465 bus/cdx: not in enabled drivers build config
00:02:23.465 bus/dpaa: not in enabled drivers build config
00:02:23.465 bus/fslmc: not in enabled drivers build config
00:02:23.465 bus/ifpga: not in enabled drivers build config
00:02:23.465 bus/platform: not in enabled drivers build config
00:02:23.465 bus/uacce: not in enabled drivers build config
00:02:23.465 bus/vmbus: not in enabled drivers build config
00:02:23.465 common/cnxk: not in enabled drivers build config
00:02:23.465 common/mlx5: not in enabled drivers build config
00:02:23.465 common/nfp: not in enabled drivers build config
00:02:23.465 common/nitrox: not in enabled drivers build config
00:02:23.465 common/qat: not in enabled drivers build config
00:02:23.465 common/sfc_efx: not in enabled drivers build config
00:02:23.466 mempool/bucket: not in enabled drivers build config
00:02:23.466 mempool/cnxk: not in enabled drivers build config
00:02:23.466 mempool/dpaa: not in enabled drivers build config
00:02:23.466 mempool/dpaa2: not in enabled drivers build config
00:02:23.466 mempool/octeontx: not in enabled drivers build config
00:02:23.466 mempool/stack: not in enabled drivers build config
00:02:23.466 dma/cnxk: not in enabled drivers build config
00:02:23.466 dma/dpaa: not in enabled drivers build config
00:02:23.466 dma/dpaa2: not in enabled drivers build config
00:02:23.466 dma/hisilicon: not in enabled drivers build config
00:02:23.466 dma/idxd: not in enabled drivers build config
00:02:23.466 dma/ioat: not in enabled drivers build config
00:02:23.466 dma/skeleton: not in enabled drivers build config
00:02:23.466 net/af_packet: not in enabled drivers build config
00:02:23.466 net/af_xdp: not in enabled drivers build config
00:02:23.466 net/ark: not in enabled drivers build config
00:02:23.466 net/atlantic: not in enabled drivers build config
00:02:23.466 net/avp: not in enabled drivers build config
00:02:23.466 net/axgbe: not in enabled drivers build config
00:02:23.466 net/bnx2x: not in enabled drivers build config
00:02:23.466 net/bnxt: not in enabled drivers build config
00:02:23.466 net/bonding: not in enabled drivers build config
00:02:23.466 net/cnxk: not in enabled drivers build config
00:02:23.466 net/cpfl: not in enabled drivers build config
00:02:23.466 net/cxgbe: not in enabled drivers build config
00:02:23.466 net/dpaa: not in enabled drivers build config
00:02:23.466 net/dpaa2: not in enabled drivers build config
00:02:23.466 net/e1000: not in enabled drivers build config
00:02:23.466 net/ena: not in enabled drivers build config
00:02:23.466 net/enetc: not in enabled drivers build config
00:02:23.466 net/enetfec: not in enabled drivers build config
00:02:23.466 net/enic: not in enabled drivers build config
00:02:23.466 net/failsafe: not in enabled drivers build config
00:02:23.466 net/fm10k: not in enabled drivers build config
00:02:23.466 net/gve: not in enabled drivers build config
00:02:23.466 net/hinic: not in enabled drivers build config
00:02:23.466 net/hns3: not in enabled drivers build config
00:02:23.466 net/i40e: not in enabled drivers build config
00:02:23.466 net/iavf: not in enabled drivers build config
00:02:23.466 net/ice: not in enabled drivers build config
00:02:23.466 net/idpf: not in enabled drivers build config
00:02:23.466 net/igc: not in enabled drivers build config
00:02:23.466 net/ionic: not in enabled drivers build config
00:02:23.466 net/ipn3ke: not in enabled drivers build config
00:02:23.466 net/ixgbe: not in enabled drivers build config
00:02:23.466 net/mana: not in enabled drivers build config
00:02:23.466 net/memif: not in enabled drivers build config
00:02:23.466 net/mlx4: not in enabled drivers build config
00:02:23.466 net/mlx5: not in enabled drivers build config
00:02:23.466 net/mvneta: not in enabled drivers build config
00:02:23.466 net/mvpp2: not in enabled drivers build config
00:02:23.466 net/netvsc: not in enabled drivers build config
00:02:23.466 net/nfb: not in enabled drivers build config
00:02:23.466 net/nfp: not in enabled drivers build config
00:02:23.466 net/ngbe: not in enabled drivers build config
00:02:23.466 net/null: not in enabled drivers build config
00:02:23.466 net/octeontx: not in enabled drivers build config
00:02:23.466 net/octeon_ep: not in enabled drivers build config
00:02:23.466 net/pcap: not in enabled drivers build config
00:02:23.466 net/pfe: not in enabled drivers build config
00:02:23.466 net/qede: not in enabled drivers build config
00:02:23.466 net/ring: not in enabled drivers build config
00:02:23.466 net/sfc: not in enabled drivers build config
00:02:23.466 net/softnic: not in enabled drivers build config
00:02:23.466 net/tap: not in enabled drivers build config
00:02:23.466 net/thunderx: not in enabled drivers build config
00:02:23.466 net/txgbe: not in enabled drivers build config
00:02:23.466 net/vdev_netvsc: not in enabled drivers build config
00:02:23.466 net/vhost: not in enabled drivers build config
00:02:23.466 net/virtio: not in enabled drivers build config
00:02:23.466 net/vmxnet3: not in enabled drivers build config
00:02:23.466 raw/*: missing internal dependency, "rawdev"
00:02:23.466 crypto/armv8: not in enabled drivers build config
00:02:23.466 crypto/bcmfs: not in enabled drivers build config
00:02:23.466 crypto/caam_jr: not in enabled drivers build config
00:02:23.466 crypto/ccp: not in enabled drivers build config
00:02:23.466 crypto/cnxk: not in enabled drivers build config
00:02:23.466 crypto/dpaa_sec: not in enabled drivers build config
00:02:23.466 crypto/dpaa2_sec: not in enabled drivers build config
00:02:23.466 crypto/ipsec_mb: not in enabled drivers build config
00:02:23.466 crypto/mlx5: not in enabled drivers build config
00:02:23.466 crypto/mvsam: not in enabled drivers build config
00:02:23.466 crypto/nitrox: not in enabled drivers build config
00:02:23.466 crypto/null: not in enabled drivers build config
00:02:23.466 crypto/octeontx: not in enabled drivers build config
00:02:23.466 crypto/openssl: not in enabled drivers build config
00:02:23.466 crypto/scheduler: not in enabled drivers build config
00:02:23.466 crypto/uadk: not in enabled drivers build config
00:02:23.466 crypto/virtio: not in enabled drivers build config
00:02:23.466 compress/isal: not in enabled drivers build config
00:02:23.466 compress/mlx5: not in enabled drivers build config
00:02:23.466 compress/nitrox: not in enabled drivers build config
00:02:23.466 compress/octeontx: not in enabled drivers build config
00:02:23.466 compress/zlib: not in enabled drivers build config
00:02:23.466 regex/*: missing internal dependency, "regexdev"
00:02:23.466 ml/*: missing internal dependency, "mldev"
00:02:23.466 vdpa/ifc: not in enabled drivers build config
00:02:23.466 vdpa/mlx5: not in enabled drivers build config
00:02:23.466 vdpa/nfp: not in enabled drivers build config
00:02:23.466 vdpa/sfc: not in enabled drivers build config
00:02:23.466 event/*: missing internal dependency, "eventdev"
00:02:23.466 baseband/*: missing internal dependency, "bbdev"
00:02:23.466 gpu/*: missing internal dependency, "gpudev"
00:02:23.466
00:02:23.466
00:02:23.466 Build targets in project: 85
00:02:23.466
00:02:23.466 DPDK 24.03.0
00:02:23.466
00:02:23.466 User defined options
00:02:23.466 buildtype : debug
00:02:23.466 default_library : static
00:02:23.466 libdir : lib
00:02:23.466 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:02:23.466 c_args : -fPIC -Werror
00:02:23.466 c_link_args :
00:02:23.466 cpu_instruction_set: native
00:02:23.466 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:23.466 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:23.466 enable_docs : false
00:02:23.466 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:23.466 enable_kmods : false
00:02:23.466 max_lcores : 128
00:02:23.466 tests : false
00:02:23.466
00:02:23.466 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:23.466 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:02:23.466 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:23.466 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:23.466 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:23.466 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:23.466 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:23.466 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:23.762 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:23.762 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:23.762 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:23.762 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:23.762 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:23.762 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:23.762 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:23.762 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:23.762 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:23.762 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:23.762 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:23.762 [18/268] Linking static target lib/librte_kvargs.a
00:02:23.762 [19/268] Linking static target lib/librte_log.a
00:02:24.065 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.066 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:24.066 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:24.066 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:24.066 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:24.066 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:24.066 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:24.066 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:24.066 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:24.066 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:24.066 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:24.066 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:24.066 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:24.066 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:24.066 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:24.066 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:24.066 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:24.066 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:24.066 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:24.066 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:24.066 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:24.066 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:24.066 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:24.066 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:24.066 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:24.066 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:24.066 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:24.066 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:24.066 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:24.066 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:24.066 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:24.066 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:24.066 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:24.066 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:24.066 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:24.066 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:24.066 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:24.066 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:24.066 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:24.066 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:24.066 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:24.066 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:24.066 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:24.066 [63/268] Linking static target lib/librte_telemetry.a
00:02:24.066 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:24.066 [65/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:24.066 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:24.327 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:24.327 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:24.327 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:24.327 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:24.327 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:24.327 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:24.327 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:24.327 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:24.327 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:24.327 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:24.327 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:24.327 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:24.327 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:24.327 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:24.327 [81/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:24.327 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:24.327 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:24.327 [84/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:24.327 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:24.327 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:24.327 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:24.327 [88/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:24.327 [89/268] Linking static target lib/librte_pci.a
00:02:24.327 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:24.327 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:24.327 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:24.327 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:24.327 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:24.327 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:24.327 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:24.327 [97/268] Linking static target lib/librte_ring.a
00:02:24.327 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:24.327 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:24.327 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:24.327 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:24.327 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:24.327 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:24.327 [104/268] Linking static target lib/librte_eal.a
00:02:24.327 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:24.327 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:24.327 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:24.327 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:24.327 [109/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.327 [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:24.327 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:24.327 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:24.327 [113/268] Linking static target lib/librte_mempool.a
00:02:24.327 [114/268] Linking static target lib/librte_rcu.a
00:02:24.327 [115/268] Compiling C object
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.327 [116/268] Linking target lib/librte_log.so.24.1 00:02:24.588 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.588 [118/268] Linking static target lib/librte_mbuf.a 00:02:24.588 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.588 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.588 [121/268] Linking static target lib/librte_net.a 00:02:24.588 [122/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.588 [123/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.588 [124/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.588 [125/268] Linking target lib/librte_kvargs.so.24.1 00:02:24.588 [126/268] Linking static target lib/librte_meter.a 00:02:24.588 [127/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.588 [128/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.588 [129/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.588 [130/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.846 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.846 [132/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.846 [133/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.846 [134/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.846 [135/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.846 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.846 [137/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.846 [138/268] Linking static target lib/librte_timer.a 00:02:24.846 [139/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.846 [140/268] Linking static target lib/librte_cmdline.a 00:02:24.846 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.846 [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.846 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.846 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.846 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.846 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.846 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.846 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:24.846 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.846 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.846 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.846 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.846 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.846 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.846 [155/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:24.846 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.847 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.847 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.847 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.847 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.847 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.847 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.847 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.847 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.847 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.847 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.847 [167/268] Linking static target lib/librte_compressdev.a 00:02:24.847 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.847 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.847 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.847 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.847 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.847 [173/268] Linking static target lib/librte_power.a 00:02:24.847 [174/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.847 [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.847 [176/268] Linking static target lib/librte_dmadev.a 00:02:24.847 [177/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.847 [178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.847 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.847 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.847 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.847 [182/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.847 [183/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.847 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.847 [185/268] Linking static target lib/librte_security.a 00:02:24.847 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:24.847 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.107 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.107 [189/268] Linking static target lib/librte_reorder.a 00:02:25.107 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:25.107 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.107 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.107 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.107 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.107 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 
00:02:25.107 [196/268] Linking static target lib/librte_hash.a 00:02:25.107 [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.107 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.107 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.107 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.107 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.107 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.107 [203/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.107 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.107 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.107 [206/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.107 [207/268] Linking static target lib/librte_cryptodev.a 00:02:25.107 [208/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.107 [209/268] Linking static target drivers/librte_bus_vdev.a 00:02:25.107 [210/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.107 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:25.107 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.107 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.107 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.107 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.367 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.367 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.367 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.626 [219/268] Linking static target lib/librte_ethdev.a 00:02:25.626 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.626 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.626 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.626 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.886 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.145 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.145 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.145 [227/268] Linking static target lib/librte_vhost.a 00:02:26.145 [228/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.145 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.522 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.460 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.581 
[232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.518 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.518 [234/268] Linking target lib/librte_eal.so.24.1 00:02:37.776 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:37.776 [236/268] Linking target lib/librte_ring.so.24.1 00:02:37.776 [237/268] Linking target lib/librte_timer.so.24.1 00:02:37.776 [238/268] Linking target lib/librte_meter.so.24.1 00:02:37.776 [239/268] Linking target lib/librte_pci.so.24.1 00:02:37.776 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:37.776 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:37.776 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:37.776 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:37.776 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:37.776 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:37.776 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:38.034 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:38.035 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:38.035 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:38.035 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:38.035 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:38.035 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:38.035 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:38.293 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:38.293 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:38.293 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:38.293 [257/268] Linking target lib/librte_net.so.24.1 00:02:38.293 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:38.553 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:38.553 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:38.553 [261/268] Linking target lib/librte_security.so.24.1 00:02:38.553 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:38.553 [263/268] Linking target lib/librte_hash.so.24.1 00:02:38.553 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:38.813 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:38.813 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:38.813 [267/268] Linking target lib/librte_power.so.24.1 00:02:38.813 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:38.813 INFO: autodetecting backend as ninja 00:02:38.813 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:39.749 CC lib/log/log.o 00:02:39.749 CC lib/log/log_flags.o 00:02:39.749 CC lib/log/log_deprecated.o 00:02:39.749 CC lib/ut/ut.o 00:02:39.749 CC lib/ut_mock/mock.o 00:02:40.008 LIB libspdk_ut.a 00:02:40.008 LIB libspdk_log.a 00:02:40.008 LIB libspdk_ut_mock.a 00:02:40.266 CC lib/ioat/ioat.o 00:02:40.266 CC lib/dma/dma.o 00:02:40.266 CXX lib/trace_parser/trace.o 00:02:40.266 CC 
lib/util/bit_array.o 00:02:40.266 CC lib/util/base64.o 00:02:40.266 CC lib/util/cpuset.o 00:02:40.266 CC lib/util/crc16.o 00:02:40.266 CC lib/util/crc32.o 00:02:40.266 CC lib/util/crc32c.o 00:02:40.266 CC lib/util/crc32_ieee.o 00:02:40.266 CC lib/util/crc64.o 00:02:40.266 CC lib/util/dif.o 00:02:40.266 CC lib/util/fd.o 00:02:40.266 CC lib/util/fd_group.o 00:02:40.266 CC lib/util/file.o 00:02:40.266 CC lib/util/math.o 00:02:40.266 CC lib/util/hexlify.o 00:02:40.266 CC lib/util/iov.o 00:02:40.266 CC lib/util/net.o 00:02:40.266 CC lib/util/pipe.o 00:02:40.266 CC lib/util/strerror_tls.o 00:02:40.266 CC lib/util/string.o 00:02:40.266 CC lib/util/uuid.o 00:02:40.266 CC lib/util/xor.o 00:02:40.266 CC lib/util/zipf.o 00:02:40.525 CC lib/vfio_user/host/vfio_user_pci.o 00:02:40.525 CC lib/vfio_user/host/vfio_user.o 00:02:40.525 LIB libspdk_dma.a 00:02:40.525 LIB libspdk_ioat.a 00:02:40.525 LIB libspdk_vfio_user.a 00:02:40.525 LIB libspdk_util.a 00:02:40.783 LIB libspdk_trace_parser.a 00:02:41.042 CC lib/json/json_parse.o 00:02:41.042 CC lib/json/json_util.o 00:02:41.042 CC lib/json/json_write.o 00:02:41.042 CC lib/env_dpdk/memory.o 00:02:41.042 CC lib/env_dpdk/env.o 00:02:41.042 CC lib/conf/conf.o 00:02:41.042 CC lib/env_dpdk/pci.o 00:02:41.042 CC lib/env_dpdk/init.o 00:02:41.042 CC lib/env_dpdk/threads.o 00:02:41.042 CC lib/rdma_provider/common.o 00:02:41.042 CC lib/env_dpdk/pci_ioat.o 00:02:41.042 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:41.042 CC lib/env_dpdk/pci_virtio.o 00:02:41.042 CC lib/env_dpdk/pci_vmd.o 00:02:41.042 CC lib/env_dpdk/pci_event.o 00:02:41.042 CC lib/env_dpdk/pci_idxd.o 00:02:41.042 CC lib/env_dpdk/sigbus_handler.o 00:02:41.042 CC lib/vmd/vmd.o 00:02:41.042 CC lib/vmd/led.o 00:02:41.042 CC lib/idxd/idxd.o 00:02:41.042 CC lib/env_dpdk/pci_dpdk.o 00:02:41.042 CC lib/idxd/idxd_user.o 00:02:41.042 CC lib/rdma_utils/rdma_utils.o 00:02:41.042 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:41.042 CC lib/idxd/idxd_kernel.o 00:02:41.042 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:41.042 LIB libspdk_rdma_provider.a 00:02:41.042 LIB libspdk_conf.a 00:02:41.042 LIB libspdk_json.a 00:02:41.301 LIB libspdk_rdma_utils.a 00:02:41.301 LIB libspdk_idxd.a 00:02:41.301 LIB libspdk_vmd.a 00:02:41.560 CC lib/jsonrpc/jsonrpc_server.o 00:02:41.560 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:41.560 CC lib/jsonrpc/jsonrpc_client.o 00:02:41.560 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:41.560 LIB libspdk_jsonrpc.a 00:02:41.819 LIB libspdk_env_dpdk.a 00:02:42.078 CC lib/rpc/rpc.o 00:02:42.078 LIB libspdk_rpc.a 00:02:42.647 CC lib/keyring/keyring.o 00:02:42.647 CC lib/keyring/keyring_rpc.o 00:02:42.647 CC lib/trace/trace.o 00:02:42.647 CC lib/trace/trace_flags.o 00:02:42.647 CC lib/notify/notify.o 00:02:42.647 CC lib/trace/trace_rpc.o 00:02:42.647 CC lib/notify/notify_rpc.o 00:02:42.647 LIB libspdk_notify.a 00:02:42.647 LIB libspdk_keyring.a 00:02:42.647 LIB libspdk_trace.a 00:02:42.906 CC lib/sock/sock.o 00:02:42.906 CC lib/sock/sock_rpc.o 00:02:42.906 CC lib/thread/thread.o 00:02:42.906 CC lib/thread/iobuf.o 00:02:43.165 LIB libspdk_sock.a 00:02:43.733 CC lib/nvme/nvme_ctrlr.o 00:02:43.733 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:43.733 CC lib/nvme/nvme_fabric.o 00:02:43.733 CC lib/nvme/nvme_ns_cmd.o 00:02:43.733 CC lib/nvme/nvme_ns.o 00:02:43.733 CC lib/nvme/nvme_pcie_common.o 00:02:43.733 CC lib/nvme/nvme_pcie.o 00:02:43.733 CC lib/nvme/nvme_qpair.o 00:02:43.733 CC lib/nvme/nvme.o 00:02:43.733 CC lib/nvme/nvme_quirks.o 00:02:43.733 CC lib/nvme/nvme_transport.o 00:02:43.733 CC lib/nvme/nvme_discovery.o 00:02:43.733 
CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:43.733 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:43.733 CC lib/nvme/nvme_tcp.o 00:02:43.733 CC lib/nvme/nvme_opal.o 00:02:43.733 CC lib/nvme/nvme_io_msg.o 00:02:43.733 CC lib/nvme/nvme_poll_group.o 00:02:43.733 CC lib/nvme/nvme_zns.o 00:02:43.733 CC lib/nvme/nvme_stubs.o 00:02:43.733 CC lib/nvme/nvme_auth.o 00:02:43.733 CC lib/nvme/nvme_cuse.o 00:02:43.733 CC lib/nvme/nvme_vfio_user.o 00:02:43.733 CC lib/nvme/nvme_rdma.o 00:02:43.733 LIB libspdk_thread.a 00:02:44.300 CC lib/virtio/virtio.o 00:02:44.300 CC lib/init/json_config.o 00:02:44.300 CC lib/init/subsystem.o 00:02:44.300 CC lib/init/rpc.o 00:02:44.300 CC lib/virtio/virtio_vfio_user.o 00:02:44.300 CC lib/vfu_tgt/tgt_endpoint.o 00:02:44.300 CC lib/virtio/virtio_vhost_user.o 00:02:44.300 CC lib/init/subsystem_rpc.o 00:02:44.300 CC lib/vfu_tgt/tgt_rpc.o 00:02:44.300 CC lib/virtio/virtio_pci.o 00:02:44.300 CC lib/accel/accel.o 00:02:44.300 CC lib/accel/accel_rpc.o 00:02:44.300 CC lib/accel/accel_sw.o 00:02:44.300 CC lib/blob/blobstore.o 00:02:44.300 CC lib/blob/request.o 00:02:44.300 CC lib/blob/zeroes.o 00:02:44.300 CC lib/blob/blob_bs_dev.o 00:02:44.300 LIB libspdk_init.a 00:02:44.300 LIB libspdk_virtio.a 00:02:44.300 LIB libspdk_vfu_tgt.a 00:02:44.559 CC lib/event/app.o 00:02:44.559 CC lib/event/reactor.o 00:02:44.559 CC lib/event/log_rpc.o 00:02:44.559 CC lib/event/app_rpc.o 00:02:44.559 CC lib/event/scheduler_static.o 00:02:44.819 LIB libspdk_accel.a 00:02:44.819 LIB libspdk_event.a 00:02:45.078 LIB libspdk_nvme.a 00:02:45.078 CC lib/bdev/bdev_rpc.o 00:02:45.078 CC lib/bdev/bdev.o 00:02:45.078 CC lib/bdev/bdev_zone.o 00:02:45.078 CC lib/bdev/part.o 00:02:45.078 CC lib/bdev/scsi_nvme.o 00:02:46.016 LIB libspdk_blob.a 00:02:46.284 CC lib/blobfs/blobfs.o 00:02:46.284 CC lib/blobfs/tree.o 00:02:46.284 CC lib/lvol/lvol.o 00:02:46.853 LIB libspdk_lvol.a 00:02:46.853 LIB libspdk_blobfs.a 00:02:46.853 LIB libspdk_bdev.a 00:02:47.113 CC lib/scsi/dev.o 00:02:47.113 CC lib/scsi/lun.o 00:02:47.113 CC lib/scsi/port.o 00:02:47.113 CC lib/scsi/scsi.o 00:02:47.113 CC lib/scsi/scsi_bdev.o 00:02:47.113 CC lib/scsi/scsi_pr.o 00:02:47.372 CC lib/scsi/scsi_rpc.o 00:02:47.372 CC lib/scsi/task.o 00:02:47.372 CC lib/nvmf/ctrlr_discovery.o 00:02:47.372 CC lib/nvmf/ctrlr.o 00:02:47.372 CC lib/nvmf/ctrlr_bdev.o 00:02:47.372 CC lib/nvmf/subsystem.o 00:02:47.372 CC lib/nvmf/nvmf_rpc.o 00:02:47.372 CC lib/nvmf/nvmf.o 00:02:47.372 CC lib/nvmf/transport.o 00:02:47.372 CC lib/nvmf/tcp.o 00:02:47.372 CC lib/nvmf/mdns_server.o 00:02:47.372 CC lib/ublk/ublk.o 00:02:47.372 CC lib/nvmf/stubs.o 00:02:47.372 CC lib/nvmf/vfio_user.o 00:02:47.372 CC lib/ublk/ublk_rpc.o 00:02:47.372 CC lib/nbd/nbd.o 00:02:47.372 CC lib/nvmf/auth.o 00:02:47.372 CC lib/nvmf/rdma.o 00:02:47.372 CC lib/nbd/nbd_rpc.o 00:02:47.372 CC lib/ftl/ftl_core.o 00:02:47.372 CC lib/ftl/ftl_init.o 00:02:47.372 CC lib/ftl/ftl_layout.o 00:02:47.372 CC lib/ftl/ftl_debug.o 00:02:47.372 CC lib/ftl/ftl_sb.o 00:02:47.372 CC lib/ftl/ftl_io.o 00:02:47.372 CC lib/ftl/ftl_l2p.o 00:02:47.372 CC lib/ftl/ftl_l2p_flat.o 00:02:47.372 CC lib/ftl/ftl_nv_cache.o 00:02:47.372 CC lib/ftl/ftl_band.o 00:02:47.372 CC lib/ftl/ftl_band_ops.o 00:02:47.372 CC lib/ftl/ftl_rq.o 00:02:47.372 CC lib/ftl/ftl_writer.o 00:02:47.372 CC lib/ftl/ftl_reloc.o 00:02:47.372 CC lib/ftl/ftl_l2p_cache.o 00:02:47.372 CC lib/ftl/ftl_p2l.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_md.o 
00:02:47.372 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:47.372 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:47.372 CC lib/ftl/utils/ftl_conf.o 00:02:47.372 CC lib/ftl/utils/ftl_md.o 00:02:47.372 CC lib/ftl/utils/ftl_mempool.o 00:02:47.372 CC lib/ftl/utils/ftl_bitmap.o 00:02:47.372 CC lib/ftl/utils/ftl_property.o 00:02:47.372 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:47.372 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:47.372 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:47.372 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:47.372 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:47.372 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:47.372 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:47.372 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:47.372 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:47.372 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:47.372 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:47.372 CC lib/ftl/base/ftl_base_dev.o 00:02:47.372 CC lib/ftl/base/ftl_base_bdev.o 00:02:47.372 CC lib/ftl/ftl_trace.o 00:02:47.630 LIB libspdk_scsi.a 00:02:47.960 LIB libspdk_nbd.a 00:02:47.960 LIB libspdk_ublk.a 00:02:47.960 CC lib/iscsi/conn.o 00:02:47.960 CC lib/iscsi/init_grp.o 00:02:47.960 CC lib/vhost/vhost.o 00:02:47.960 CC lib/vhost/vhost_rpc.o 00:02:47.960 CC lib/iscsi/iscsi.o 00:02:47.960 CC lib/iscsi/md5.o 00:02:47.960 CC lib/vhost/vhost_scsi.o 00:02:47.960 LIB libspdk_ftl.a 00:02:47.960 CC lib/iscsi/param.o 00:02:47.960 CC lib/vhost/vhost_blk.o 00:02:47.960 CC lib/iscsi/portal_grp.o 00:02:47.960 CC lib/vhost/rte_vhost_user.o 00:02:47.960 CC lib/iscsi/tgt_node.o 00:02:47.960 CC lib/iscsi/iscsi_subsystem.o 00:02:47.960 CC lib/iscsi/task.o 00:02:47.960 CC lib/iscsi/iscsi_rpc.o 00:02:48.528 LIB libspdk_nvmf.a 00:02:48.788 LIB libspdk_vhost.a 00:02:49.047 LIB libspdk_iscsi.a 00:02:49.306 CC module/env_dpdk/env_dpdk_rpc.o 00:02:49.306 CC module/vfu_device/vfu_virtio.o 00:02:49.306 CC module/vfu_device/vfu_virtio_blk.o 00:02:49.306 CC module/vfu_device/vfu_virtio_scsi.o 00:02:49.306 CC module/vfu_device/vfu_virtio_rpc.o 00:02:49.565 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:49.565 LIB libspdk_env_dpdk_rpc.a 00:02:49.565 CC module/blob/bdev/blob_bdev.o 00:02:49.565 CC module/accel/iaa/accel_iaa.o 00:02:49.565 CC module/accel/iaa/accel_iaa_rpc.o 00:02:49.565 CC module/accel/error/accel_error_rpc.o 00:02:49.565 CC module/accel/error/accel_error.o 00:02:49.565 CC module/accel/dsa/accel_dsa.o 00:02:49.565 CC module/accel/dsa/accel_dsa_rpc.o 00:02:49.565 CC module/keyring/file/keyring.o 00:02:49.565 CC module/keyring/file/keyring_rpc.o 00:02:49.565 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:49.565 CC module/accel/ioat/accel_ioat.o 00:02:49.565 CC module/accel/ioat/accel_ioat_rpc.o 00:02:49.565 CC module/scheduler/gscheduler/gscheduler.o 00:02:49.565 CC module/keyring/linux/keyring.o 00:02:49.566 CC module/keyring/linux/keyring_rpc.o 00:02:49.566 CC module/sock/posix/posix.o 00:02:49.566 LIB libspdk_scheduler_dynamic.a 00:02:49.566 LIB libspdk_accel_error.a 00:02:49.566 LIB libspdk_keyring_file.a 00:02:49.566 LIB libspdk_scheduler_gscheduler.a 00:02:49.566 LIB libspdk_keyring_linux.a 00:02:49.566 LIB libspdk_scheduler_dpdk_governor.a 00:02:49.566 LIB libspdk_accel_iaa.a 00:02:49.566 LIB libspdk_accel_ioat.a 
00:02:49.825 LIB libspdk_blob_bdev.a 00:02:49.825 LIB libspdk_accel_dsa.a 00:02:49.825 LIB libspdk_vfu_device.a 00:02:50.083 LIB libspdk_sock_posix.a 00:02:50.083 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:50.083 CC module/bdev/delay/vbdev_delay.o 00:02:50.083 CC module/bdev/malloc/bdev_malloc.o 00:02:50.083 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:50.083 CC module/bdev/nvme/bdev_nvme.o 00:02:50.083 CC module/bdev/nvme/nvme_rpc.o 00:02:50.083 CC module/bdev/nvme/bdev_mdns_client.o 00:02:50.083 CC module/bdev/nvme/vbdev_opal.o 00:02:50.083 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:50.083 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:50.083 CC module/bdev/error/vbdev_error.o 00:02:50.083 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:50.083 CC module/bdev/error/vbdev_error_rpc.o 00:02:50.083 CC module/bdev/split/vbdev_split.o 00:02:50.083 CC module/blobfs/bdev/blobfs_bdev.o 00:02:50.083 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:50.083 CC module/bdev/split/vbdev_split_rpc.o 00:02:50.083 CC module/bdev/raid/bdev_raid.o 00:02:50.083 CC module/bdev/raid/bdev_raid_sb.o 00:02:50.083 CC module/bdev/gpt/vbdev_gpt.o 00:02:50.083 CC module/bdev/raid/bdev_raid_rpc.o 00:02:50.083 CC module/bdev/gpt/gpt.o 00:02:50.083 CC module/bdev/passthru/vbdev_passthru.o 00:02:50.083 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:50.083 CC module/bdev/raid/raid0.o 00:02:50.083 CC module/bdev/raid/raid1.o 00:02:50.083 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:50.083 CC module/bdev/raid/concat.o 00:02:50.083 CC module/bdev/aio/bdev_aio.o 00:02:50.083 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:50.083 CC module/bdev/lvol/vbdev_lvol.o 00:02:50.083 CC module/bdev/aio/bdev_aio_rpc.o 00:02:50.083 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:50.083 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:50.083 CC module/bdev/iscsi/bdev_iscsi.o 00:02:50.083 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:50.083 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:50.083 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:50.083 CC module/bdev/null/bdev_null.o 00:02:50.083 CC module/bdev/null/bdev_null_rpc.o 00:02:50.083 CC module/bdev/ftl/bdev_ftl.o 00:02:50.083 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:50.342 LIB libspdk_bdev_error.a 00:02:50.342 LIB libspdk_blobfs_bdev.a 00:02:50.342 LIB libspdk_bdev_passthru.a 00:02:50.342 LIB libspdk_bdev_aio.a 00:02:50.342 LIB libspdk_bdev_split.a 00:02:50.342 LIB libspdk_bdev_delay.a 00:02:50.342 LIB libspdk_bdev_malloc.a 00:02:50.342 LIB libspdk_bdev_gpt.a 00:02:50.601 LIB libspdk_bdev_ftl.a 00:02:50.601 LIB libspdk_bdev_iscsi.a 00:02:50.601 LIB libspdk_bdev_zone_block.a 00:02:50.601 LIB libspdk_bdev_null.a 00:02:50.601 LIB libspdk_bdev_lvol.a 00:02:50.601 LIB libspdk_bdev_virtio.a 00:02:50.860 LIB libspdk_bdev_raid.a 00:02:51.429 LIB libspdk_bdev_nvme.a 00:02:51.998 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.998 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.998 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:51.998 CC module/event/subsystems/vmd/vmd.o 00:02:51.998 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.998 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.998 CC module/event/subsystems/sock/sock.o 00:02:51.998 CC module/event/subsystems/keyring/keyring.o 00:02:51.998 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.258 LIB libspdk_event_vfu_tgt.a 00:02:52.258 LIB libspdk_event_keyring.a 00:02:52.258 LIB libspdk_event_vhost_blk.a 00:02:52.258 LIB libspdk_event_vmd.a 00:02:52.258 LIB libspdk_event_iobuf.a 00:02:52.258 LIB 
libspdk_event_scheduler.a 00:02:52.258 LIB libspdk_event_sock.a 00:02:52.517 CC module/event/subsystems/accel/accel.o 00:02:52.776 LIB libspdk_event_accel.a 00:02:53.035 CC module/event/subsystems/bdev/bdev.o 00:02:53.035 LIB libspdk_event_bdev.a 00:02:53.603 CC module/event/subsystems/ublk/ublk.o 00:02:53.603 CC module/event/subsystems/scsi/scsi.o 00:02:53.603 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.603 CC module/event/subsystems/nbd/nbd.o 00:02:53.603 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.603 LIB libspdk_event_ublk.a 00:02:53.603 LIB libspdk_event_nbd.a 00:02:53.603 LIB libspdk_event_scsi.a 00:02:53.603 LIB libspdk_event_nvmf.a 00:02:53.862 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.862 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.121 LIB libspdk_event_vhost_scsi.a 00:02:54.121 LIB libspdk_event_iscsi.a 00:02:54.381 CXX app/trace/trace.o 00:02:54.381 CC app/trace_record/trace_record.o 00:02:54.381 TEST_HEADER include/spdk/accel.h 00:02:54.381 TEST_HEADER include/spdk/accel_module.h 00:02:54.381 TEST_HEADER include/spdk/barrier.h 00:02:54.381 TEST_HEADER include/spdk/assert.h 00:02:54.381 CC app/spdk_nvme_perf/perf.o 00:02:54.381 CC test/rpc_client/rpc_client_test.o 00:02:54.381 TEST_HEADER include/spdk/bdev.h 00:02:54.381 TEST_HEADER include/spdk/base64.h 00:02:54.381 TEST_HEADER include/spdk/bdev_module.h 00:02:54.381 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.381 TEST_HEADER include/spdk/bit_array.h 00:02:54.381 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.381 TEST_HEADER include/spdk/bit_pool.h 00:02:54.381 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.381 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.381 TEST_HEADER include/spdk/blobfs.h 00:02:54.381 CC app/spdk_nvme_identify/identify.o 00:02:54.381 TEST_HEADER include/spdk/blob.h 00:02:54.381 TEST_HEADER include/spdk/config.h 00:02:54.381 TEST_HEADER include/spdk/conf.h 00:02:54.381 TEST_HEADER include/spdk/cpuset.h 00:02:54.381 TEST_HEADER include/spdk/crc32.h 00:02:54.381 TEST_HEADER include/spdk/crc16.h 00:02:54.381 CC app/spdk_lspci/spdk_lspci.o 00:02:54.381 TEST_HEADER include/spdk/crc64.h 00:02:54.381 TEST_HEADER include/spdk/dif.h 00:02:54.381 TEST_HEADER include/spdk/dma.h 00:02:54.381 CC app/spdk_top/spdk_top.o 00:02:54.381 TEST_HEADER include/spdk/endian.h 00:02:54.381 TEST_HEADER include/spdk/event.h 00:02:54.381 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.381 TEST_HEADER include/spdk/env.h 00:02:54.381 TEST_HEADER include/spdk/fd_group.h 00:02:54.381 TEST_HEADER include/spdk/fd.h 00:02:54.381 TEST_HEADER include/spdk/file.h 00:02:54.381 TEST_HEADER include/spdk/ftl.h 00:02:54.381 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.381 TEST_HEADER include/spdk/hexlify.h 00:02:54.381 TEST_HEADER include/spdk/histogram_data.h 00:02:54.381 TEST_HEADER include/spdk/idxd.h 00:02:54.381 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.381 TEST_HEADER include/spdk/init.h 00:02:54.381 TEST_HEADER include/spdk/ioat.h 00:02:54.381 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.381 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.381 TEST_HEADER include/spdk/json.h 00:02:54.381 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:54.381 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.381 TEST_HEADER include/spdk/keyring.h 00:02:54.381 TEST_HEADER include/spdk/keyring_module.h 00:02:54.381 TEST_HEADER include/spdk/likely.h 00:02:54.381 CC app/iscsi_tgt/iscsi_tgt.o 00:02:54.381 TEST_HEADER include/spdk/lvol.h 00:02:54.381 TEST_HEADER include/spdk/log.h 00:02:54.381 TEST_HEADER 
include/spdk/memory.h 00:02:54.381 TEST_HEADER include/spdk/mmio.h 00:02:54.381 TEST_HEADER include/spdk/nbd.h 00:02:54.381 TEST_HEADER include/spdk/net.h 00:02:54.381 TEST_HEADER include/spdk/notify.h 00:02:54.381 CC app/spdk_dd/spdk_dd.o 00:02:54.381 TEST_HEADER include/spdk/nvme.h 00:02:54.381 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.381 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.381 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.381 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.381 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.381 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.381 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.381 TEST_HEADER include/spdk/nvmf.h 00:02:54.381 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.381 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.381 TEST_HEADER include/spdk/opal.h 00:02:54.381 TEST_HEADER include/spdk/opal_spec.h 00:02:54.381 TEST_HEADER include/spdk/pci_ids.h 00:02:54.381 TEST_HEADER include/spdk/pipe.h 00:02:54.381 TEST_HEADER include/spdk/queue.h 00:02:54.381 TEST_HEADER include/spdk/reduce.h 00:02:54.381 TEST_HEADER include/spdk/rpc.h 00:02:54.381 TEST_HEADER include/spdk/scheduler.h 00:02:54.381 TEST_HEADER include/spdk/scsi.h 00:02:54.381 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.381 TEST_HEADER include/spdk/sock.h 00:02:54.381 TEST_HEADER include/spdk/stdinc.h 00:02:54.381 TEST_HEADER include/spdk/string.h 00:02:54.381 TEST_HEADER include/spdk/thread.h 00:02:54.381 TEST_HEADER include/spdk/trace.h 00:02:54.381 TEST_HEADER include/spdk/trace_parser.h 00:02:54.382 TEST_HEADER include/spdk/tree.h 00:02:54.382 CC app/nvmf_tgt/nvmf_main.o 00:02:54.382 CC app/spdk_tgt/spdk_tgt.o 00:02:54.382 TEST_HEADER include/spdk/ublk.h 00:02:54.382 TEST_HEADER include/spdk/util.h 00:02:54.382 TEST_HEADER include/spdk/uuid.h 00:02:54.382 TEST_HEADER include/spdk/version.h 00:02:54.382 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.382 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.382 TEST_HEADER include/spdk/vhost.h 00:02:54.382 TEST_HEADER include/spdk/vmd.h 00:02:54.382 TEST_HEADER include/spdk/xor.h 00:02:54.382 TEST_HEADER include/spdk/zipf.h 00:02:54.382 CXX test/cpp_headers/accel.o 00:02:54.382 CXX test/cpp_headers/accel_module.o 00:02:54.382 CXX test/cpp_headers/assert.o 00:02:54.382 CXX test/cpp_headers/barrier.o 00:02:54.382 CXX test/cpp_headers/base64.o 00:02:54.382 CXX test/cpp_headers/bdev.o 00:02:54.382 CXX test/cpp_headers/bdev_module.o 00:02:54.382 CXX test/cpp_headers/bdev_zone.o 00:02:54.382 CXX test/cpp_headers/bit_array.o 00:02:54.382 CXX test/cpp_headers/bit_pool.o 00:02:54.382 CXX test/cpp_headers/blob_bdev.o 00:02:54.382 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.382 CXX test/cpp_headers/blobfs.o 00:02:54.382 CXX test/cpp_headers/blob.o 00:02:54.382 CXX test/cpp_headers/config.o 00:02:54.382 CXX test/cpp_headers/conf.o 00:02:54.382 CXX test/cpp_headers/cpuset.o 00:02:54.382 CXX test/cpp_headers/crc16.o 00:02:54.382 CXX test/cpp_headers/crc32.o 00:02:54.382 CXX test/cpp_headers/crc64.o 00:02:54.382 CXX test/cpp_headers/dif.o 00:02:54.382 CXX test/cpp_headers/dma.o 00:02:54.382 CXX test/cpp_headers/endian.o 00:02:54.382 CXX test/cpp_headers/env_dpdk.o 00:02:54.382 CXX test/cpp_headers/env.o 00:02:54.382 CXX test/cpp_headers/event.o 00:02:54.382 CXX test/cpp_headers/fd_group.o 00:02:54.382 CXX test/cpp_headers/fd.o 00:02:54.382 CXX test/cpp_headers/file.o 00:02:54.382 CXX test/cpp_headers/ftl.o 00:02:54.382 CXX test/cpp_headers/gpt_spec.o 00:02:54.382 CXX test/cpp_headers/hexlify.o 00:02:54.382 CXX 
test/cpp_headers/histogram_data.o 00:02:54.382 CXX test/cpp_headers/idxd.o 00:02:54.382 CXX test/cpp_headers/idxd_spec.o 00:02:54.382 CXX test/cpp_headers/init.o 00:02:54.382 CXX test/cpp_headers/ioat.o 00:02:54.382 CC test/thread/poller_perf/poller_perf.o 00:02:54.382 CXX test/cpp_headers/iscsi_spec.o 00:02:54.382 CXX test/cpp_headers/ioat_spec.o 00:02:54.382 CXX test/cpp_headers/json.o 00:02:54.382 CC test/app/stub/stub.o 00:02:54.382 CXX test/cpp_headers/jsonrpc.o 00:02:54.382 CC examples/ioat/perf/perf.o 00:02:54.382 CC test/app/histogram_perf/histogram_perf.o 00:02:54.382 CC examples/ioat/verify/verify.o 00:02:54.382 CC examples/util/zipf/zipf.o 00:02:54.382 CC test/app/jsoncat/jsoncat.o 00:02:54.382 CC test/thread/lock/spdk_lock.o 00:02:54.382 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.382 CC test/env/pci/pci_ut.o 00:02:54.382 CC test/env/memory/memory_ut.o 00:02:54.382 CC test/env/vtophys/vtophys.o 00:02:54.382 CC app/fio/nvme/fio_plugin.o 00:02:54.645 CXX test/cpp_headers/keyring.o 00:02:54.645 CC test/app/bdev_svc/bdev_svc.o 00:02:54.645 CC test/dma/test_dma/test_dma.o 00:02:54.645 CC app/fio/bdev/fio_plugin.o 00:02:54.645 LINK spdk_lspci 00:02:54.645 CC test/env/mem_callbacks/mem_callbacks.o 00:02:54.645 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:54.645 LINK rpc_client_test 00:02:54.645 LINK spdk_nvme_discover 00:02:54.645 LINK spdk_trace_record 00:02:54.645 LINK interrupt_tgt 00:02:54.645 CXX test/cpp_headers/keyring_module.o 00:02:54.645 LINK jsoncat 00:02:54.645 CXX test/cpp_headers/likely.o 00:02:54.645 CXX test/cpp_headers/log.o 00:02:54.645 CXX test/cpp_headers/lvol.o 00:02:54.645 LINK poller_perf 00:02:54.645 LINK zipf 00:02:54.645 CXX test/cpp_headers/memory.o 00:02:54.645 LINK vtophys 00:02:54.645 CXX test/cpp_headers/mmio.o 00:02:54.645 LINK histogram_perf 00:02:54.645 CXX test/cpp_headers/nbd.o 00:02:54.645 CXX test/cpp_headers/net.o 00:02:54.645 CXX test/cpp_headers/notify.o 00:02:54.645 LINK env_dpdk_post_init 00:02:54.645 CXX test/cpp_headers/nvme.o 00:02:54.645 CXX test/cpp_headers/nvme_intel.o 00:02:54.645 CXX test/cpp_headers/nvme_ocssd.o 00:02:54.645 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:54.645 LINK nvmf_tgt 00:02:54.645 CXX test/cpp_headers/nvme_spec.o 00:02:54.645 CXX test/cpp_headers/nvme_zns.o 00:02:54.645 CXX test/cpp_headers/nvmf_cmd.o 00:02:54.645 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:54.645 CXX test/cpp_headers/nvmf.o 00:02:54.645 CXX test/cpp_headers/nvmf_spec.o 00:02:54.645 CXX test/cpp_headers/nvmf_transport.o 00:02:54.645 CXX test/cpp_headers/opal.o 00:02:54.645 CXX test/cpp_headers/opal_spec.o 00:02:54.645 LINK iscsi_tgt 00:02:54.645 CXX test/cpp_headers/pci_ids.o 00:02:54.645 CXX test/cpp_headers/pipe.o 00:02:54.645 CXX test/cpp_headers/queue.o 00:02:54.645 CXX test/cpp_headers/reduce.o 00:02:54.645 CXX test/cpp_headers/rpc.o 00:02:54.645 CXX test/cpp_headers/scheduler.o 00:02:54.645 CXX test/cpp_headers/scsi.o 00:02:54.645 CXX test/cpp_headers/scsi_spec.o 00:02:54.645 CXX test/cpp_headers/sock.o 00:02:54.645 CXX test/cpp_headers/stdinc.o 00:02:54.645 CXX test/cpp_headers/string.o 00:02:54.645 CXX test/cpp_headers/thread.o 00:02:54.645 LINK stub 00:02:54.645 CXX test/cpp_headers/trace.o 00:02:54.645 CXX test/cpp_headers/trace_parser.o 00:02:54.645 CXX test/cpp_headers/tree.o 00:02:54.645 CXX test/cpp_headers/ublk.o 00:02:54.645 LINK ioat_perf 00:02:54.645 LINK spdk_tgt 00:02:54.645 CXX test/cpp_headers/util.o 00:02:54.904 LINK verify 00:02:54.904 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:54.904 LINK bdev_svc 
00:02:54.904 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:54.904 CXX test/cpp_headers/uuid.o 00:02:54.904 LINK spdk_trace 00:02:54.904 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:54.904 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:54.904 CXX test/cpp_headers/version.o 00:02:54.904 CXX test/cpp_headers/vfio_user_pci.o 00:02:54.904 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:54.904 CXX test/cpp_headers/vfio_user_spec.o 00:02:54.904 CXX test/cpp_headers/vhost.o 00:02:54.904 CXX test/cpp_headers/vmd.o 00:02:54.904 CXX test/cpp_headers/xor.o 00:02:54.904 CXX test/cpp_headers/zipf.o 00:02:54.904 LINK test_dma 00:02:54.904 LINK pci_ut 00:02:54.904 LINK spdk_dd 00:02:55.162 LINK nvme_fuzz 00:02:55.162 LINK spdk_nvme_identify 00:02:55.162 LINK spdk_bdev 00:02:55.163 LINK mem_callbacks 00:02:55.163 LINK llvm_vfio_fuzz 00:02:55.163 LINK spdk_nvme 00:02:55.421 LINK spdk_top 00:02:55.421 LINK vhost_fuzz 00:02:55.421 LINK spdk_nvme_perf 00:02:55.421 CC examples/vmd/led/led.o 00:02:55.421 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.421 CC examples/sock/hello_world/hello_sock.o 00:02:55.421 CC examples/idxd/perf/perf.o 00:02:55.421 CC examples/thread/thread/thread_ex.o 00:02:55.421 CC app/vhost/vhost.o 00:02:55.421 LINK lsvmd 00:02:55.421 LINK led 00:02:55.421 LINK memory_ut 00:02:55.680 LINK llvm_nvme_fuzz 00:02:55.680 LINK hello_sock 00:02:55.680 LINK thread 00:02:55.680 LINK idxd_perf 00:02:55.680 LINK vhost 00:02:55.680 LINK spdk_lock 00:02:56.257 LINK iscsi_fuzz 00:02:56.257 CC examples/nvme/arbitration/arbitration.o 00:02:56.257 CC examples/nvme/abort/abort.o 00:02:56.257 CC examples/nvme/reconnect/reconnect.o 00:02:56.257 CC examples/nvme/hotplug/hotplug.o 00:02:56.257 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:56.257 CC examples/nvme/hello_world/hello_world.o 00:02:56.257 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.257 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.257 CC test/event/reactor/reactor.o 00:02:56.257 CC test/event/event_perf/event_perf.o 00:02:56.523 CC test/event/reactor_perf/reactor_perf.o 00:02:56.523 CC test/event/scheduler/scheduler.o 00:02:56.523 CC test/event/app_repeat/app_repeat.o 00:02:56.523 LINK pmr_persistence 00:02:56.523 LINK hotplug 00:02:56.523 LINK cmb_copy 00:02:56.523 LINK event_perf 00:02:56.523 LINK reactor 00:02:56.523 LINK hello_world 00:02:56.523 LINK reactor_perf 00:02:56.523 LINK app_repeat 00:02:56.523 LINK reconnect 00:02:56.523 LINK abort 00:02:56.523 LINK arbitration 00:02:56.523 LINK scheduler 00:02:56.782 LINK nvme_manage 00:02:57.040 CC test/nvme/aer/aer.o 00:02:57.040 CC test/nvme/reserve/reserve.o 00:02:57.040 CC test/nvme/cuse/cuse.o 00:02:57.040 CC test/nvme/reset/reset.o 00:02:57.040 CC test/nvme/e2edp/nvme_dp.o 00:02:57.040 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.040 CC test/nvme/sgl/sgl.o 00:02:57.040 CC test/nvme/compliance/nvme_compliance.o 00:02:57.040 CC test/nvme/simple_copy/simple_copy.o 00:02:57.040 CC test/nvme/startup/startup.o 00:02:57.040 CC test/nvme/fdp/fdp.o 00:02:57.040 CC test/nvme/connect_stress/connect_stress.o 00:02:57.040 CC test/nvme/overhead/overhead.o 00:02:57.040 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.040 CC test/nvme/boot_partition/boot_partition.o 00:02:57.040 CC test/nvme/err_injection/err_injection.o 00:02:57.040 CC test/accel/dif/dif.o 00:02:57.040 CC test/blobfs/mkfs/mkfs.o 00:02:57.040 CC test/lvol/esnap/esnap.o 00:02:57.040 LINK startup 00:02:57.040 LINK connect_stress 00:02:57.040 LINK reserve 00:02:57.040 LINK boot_partition 00:02:57.040 
LINK doorbell_aers 00:02:57.040 LINK fused_ordering 00:02:57.040 LINK simple_copy 00:02:57.040 LINK aer 00:02:57.040 LINK nvme_dp 00:02:57.040 LINK reset 00:02:57.040 LINK sgl 00:02:57.299 LINK err_injection 00:02:57.299 LINK fdp 00:02:57.299 LINK mkfs 00:02:57.299 LINK overhead 00:02:57.299 LINK nvme_compliance 00:02:57.299 LINK dif 00:02:57.557 CC examples/accel/perf/accel_perf.o 00:02:57.557 CC examples/blob/cli/blobcli.o 00:02:57.557 CC examples/blob/hello_world/hello_blob.o 00:02:57.816 LINK hello_blob 00:02:57.816 LINK accel_perf 00:02:57.816 LINK cuse 00:02:57.816 LINK blobcli 00:02:58.751 CC examples/bdev/bdevperf/bdevperf.o 00:02:58.751 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.751 LINK hello_bdev 00:02:59.011 CC test/bdev/bdevio/bdevio.o 00:02:59.271 LINK bdevperf 00:02:59.271 LINK bdevio 00:03:00.650 LINK esnap 00:03:00.650 CC examples/nvmf/nvmf/nvmf.o 00:03:00.910 LINK nvmf 00:03:02.289 00:03:02.289 real 0m48.404s 00:03:02.289 user 6m13.349s 00:03:02.289 sys 2m28.352s 00:03:02.289 11:49:39 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:02.289 11:49:39 make -- common/autotest_common.sh@10 -- $ set +x 00:03:02.289 ************************************ 00:03:02.289 END TEST make 00:03:02.289 ************************************ 00:03:02.289 11:49:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:02.289 11:49:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:02.289 11:49:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.289 11:49:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.289 11:49:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.289 11:49:39 -- pm/common@44 -- $ pid=789338 00:03:02.289 11:49:39 -- pm/common@50 -- $ kill -TERM 789338 00:03:02.289 11:49:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.289 11:49:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.289 11:49:39 -- pm/common@44 -- $ pid=789340 00:03:02.289 11:49:39 -- pm/common@50 -- $ kill -TERM 789340 00:03:02.289 11:49:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.289 11:49:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:02.289 11:49:39 -- pm/common@44 -- $ pid=789342 00:03:02.289 11:49:39 -- pm/common@50 -- $ kill -TERM 789342 00:03:02.289 11:49:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.289 11:49:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:02.290 11:49:39 -- pm/common@44 -- $ pid=789364 00:03:02.290 11:49:39 -- pm/common@50 -- $ sudo -E kill -TERM 789364 00:03:02.550 11:49:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:02.550 11:49:39 -- nvmf/common.sh@7 -- # uname -s 00:03:02.550 11:49:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.550 11:49:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.550 11:49:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.550 11:49:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.550 11:49:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.550 11:49:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.550 11:49:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:02.550 11:49:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.550 11:49:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.550 11:49:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.550 11:49:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:03:02.550 11:49:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:03:02.550 11:49:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.550 11:49:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.550 11:49:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:02.550 11:49:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.550 11:49:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:02.550 11:49:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.550 11:49:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.550 11:49:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.550 11:49:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.550 11:49:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.550 11:49:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.550 11:49:39 -- paths/export.sh@5 -- # export PATH 00:03:02.550 11:49:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.550 11:49:39 -- nvmf/common.sh@47 -- # : 0 00:03:02.550 11:49:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:02.550 11:49:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:02.550 11:49:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.550 11:49:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.550 11:49:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.550 11:49:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:02.550 11:49:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:02.550 11:49:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:02.550 11:49:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.550 11:49:39 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.550 11:49:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:02.551 11:49:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.551 11:49:39 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:02.551 11:49:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.551 11:49:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:02.551 11:49:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.551 11:49:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.551 11:49:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.551 11:49:39 -- spdk/autotest.sh@48 -- # udevadm_pid=848820 00:03:02.551 11:49:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.551 11:49:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.551 11:49:39 -- pm/common@17 -- # local monitor 00:03:02.551 11:49:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.551 11:49:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.551 11:49:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.551 11:49:39 -- pm/common@21 -- # date +%s 00:03:02.551 11:49:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.551 11:49:39 -- pm/common@21 -- # date +%s 00:03:02.551 11:49:39 -- pm/common@25 -- # sleep 1 00:03:02.551 11:49:39 -- pm/common@21 -- # date +%s 00:03:02.551 11:49:39 -- pm/common@21 -- # date +%s 00:03:02.551 11:49:39 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900979 00:03:02.551 11:49:39 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900979 00:03:02.551 11:49:39 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900979 00:03:02.551 11:49:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900979 00:03:02.551 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900979_collect-vmstat.pm.log 00:03:02.551 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900979_collect-cpu-load.pm.log 00:03:02.551 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900979_collect-cpu-temp.pm.log 00:03:02.551 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900979_collect-bmc-pm.bmc.pm.log 00:03:03.489 11:49:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.489 11:49:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:03.489 11:49:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:03.489 11:49:40 -- common/autotest_common.sh@10 -- # set +x 00:03:03.489 11:49:40 -- spdk/autotest.sh@59 -- # create_test_list 00:03:03.489 11:49:40 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:03.489 11:49:40 -- common/autotest_common.sh@10 -- # set +x 00:03:03.747 11:49:40 -- 
spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:03.747 11:49:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:03.747 11:49:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:03.747 11:49:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:03.747 11:49:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:03.747 11:49:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:03.747 11:49:40 -- common/autotest_common.sh@1455 -- # uname 00:03:03.747 11:49:40 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:03.747 11:49:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:03.747 11:49:40 -- common/autotest_common.sh@1475 -- # uname 00:03:03.747 11:49:40 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:03.747 11:49:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:03.747 11:49:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:03.747 11:49:40 -- spdk/autotest.sh@72 -- # hash lcov 00:03:03.747 11:49:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:03.747 11:49:40 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:03.747 11:49:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:03.747 11:49:40 -- common/autotest_common.sh@10 -- # set +x 00:03:03.747 11:49:40 -- spdk/autotest.sh@91 -- # rm -f 00:03:03.747 11:49:40 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.943 0000:5e:00.0 (144d a80a): Already using the nvme driver 00:03:07.943 0000:af:00.0 (8086 2701): Already using the nvme driver 00:03:07.943 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:b0:00.0 (8086 2701): Already using the nvme driver 00:03:07.943 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:07.943 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:07.943 11:49:44 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:07.943 11:49:44 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:07.943 11:49:44 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:07.943 11:49:44 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:07.943 11:49:44 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:07.943 11:49:44 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:07.943 11:49:44 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:07.943 11:49:44 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.943 11:49:44 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:07.943 11:49:44 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:07.943 11:49:44 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:07.943 11:49:44 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:07.944 11:49:44 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:07.944 11:49:44 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:07.944 11:49:44 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:07.944 11:49:44 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:07.944 11:49:44 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:07.944 11:49:44 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:07.944 11:49:44 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:07.944 11:49:44 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:07.944 11:49:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.944 11:49:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.944 11:49:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:07.944 11:49:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:07.944 11:49:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.944 No valid GPT data, bailing 00:03:07.944 11:49:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.944 11:49:45 -- scripts/common.sh@391 -- # pt= 00:03:07.944 11:49:45 -- scripts/common.sh@392 -- # return 1 00:03:07.944 11:49:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.944 1+0 records in 00:03:07.944 1+0 records out 00:03:07.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00223866 s, 468 MB/s 00:03:07.944 11:49:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.944 11:49:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.944 11:49:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:07.944 11:49:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:07.944 11:49:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:07.944 No valid GPT data, bailing 00:03:07.944 11:49:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:07.944 11:49:45 -- scripts/common.sh@391 -- # pt= 00:03:07.944 11:49:45 -- scripts/common.sh@392 -- # return 1 00:03:07.944 11:49:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:07.944 1+0 records in 00:03:07.944 1+0 records out 00:03:07.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563848 s, 186 MB/s 00:03:07.944 11:49:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.944 11:49:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.944 11:49:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:03:07.944 11:49:45 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:07.944 11:49:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:07.944 No valid GPT data, bailing 00:03:07.944 11:49:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o 
value /dev/nvme2n1 00:03:07.944 11:49:45 -- scripts/common.sh@391 -- # pt= 00:03:07.944 11:49:45 -- scripts/common.sh@392 -- # return 1 00:03:07.944 11:49:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:07.944 1+0 records in 00:03:07.944 1+0 records out 00:03:07.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428494 s, 245 MB/s 00:03:07.944 11:49:45 -- spdk/autotest.sh@118 -- # sync 00:03:07.944 11:49:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.944 11:49:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:07.944 11:49:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:13.330 11:49:50 -- spdk/autotest.sh@124 -- # uname -s 00:03:13.330 11:49:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:13.330 11:49:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.330 11:49:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:13.330 11:49:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:13.330 11:49:50 -- common/autotest_common.sh@10 -- # set +x 00:03:13.330 ************************************ 00:03:13.330 START TEST setup.sh 00:03:13.330 ************************************ 00:03:13.330 11:49:50 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.330 * Looking for test storage... 00:03:13.330 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:13.330 11:49:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:13.330 11:49:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:13.330 11:49:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:13.330 11:49:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:13.330 11:49:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:13.330 11:49:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:13.330 ************************************ 00:03:13.330 START TEST acl 00:03:13.330 ************************************ 00:03:13.330 11:49:50 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:13.589 * Looking for test storage... 
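The pre-cleanup pass traced above runs once per NVMe namespace: block_in_use invokes scripts/spdk-gpt.py, which bails with "No valid GPT data", blkid then reports no PTTYPE, so the device is treated as free and its first MiB is zeroed with dd (the 1+0 records in/out lines confirm the 1 MiB wipe). A minimal sketch of that decision, with the log's extglob pattern nvme*n!(*p*) simplified to nvme*n1 and the surrounding plumbing invented for illustration:

    for dev in /dev/nvme*n1; do
        # spdk-gpt.py exits non-zero when the namespace has no valid GPT.
        if ! ./scripts/spdk-gpt.py "$dev" >/dev/null 2>&1 \
           && [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
            # Nothing owns the device: clobber any stale metadata.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done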
00:03:13.589 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:13.589 11:49:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:13.589 11:49:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:13.589 11:49:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:13.589 11:49:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:13.589 11:49:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:13.589 11:49:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:13.589 11:49:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:13.589 11:49:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.589 11:49:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.785 11:49:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:17.785 11:49:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:17.785 11:49:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.785 11:49:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:17.785 11:49:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.785 11:49:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:21.982 Hugepages 00:03:21.982 node hugesize free / total 00:03:21.982 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.982 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.982 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.982 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.982 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.983 11:49:58 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 00:03:21.983 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:21.983 11:49:58 setup.sh.acl -- 
setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:af:00.0 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:b0:00.0 == *:*:*.* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == 
*\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:03:21.983 11:49:58 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.983 11:49:58 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:21.983 11:49:58 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:21.983 11:49:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.983 ************************************ 00:03:21.983 START TEST denied 00:03:21.983 ************************************ 00:03:21.983 11:49:58 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:21.983 11:49:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:21.983 11:49:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:21.983 11:49:58 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:21.983 11:49:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.983 11:49:58 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:26.180 0000:5e:00.0 (144d a80a): Skipping denied controller at 0000:5e:00.0 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.180 11:50:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.456 00:03:31.456 real 0m8.972s 00:03:31.456 user 0m2.815s 00:03:31.456 sys 0m5.411s 00:03:31.456 11:50:07 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.456 11:50:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:31.456 ************************************ 00:03:31.456 END TEST denied 00:03:31.456 ************************************ 00:03:31.456 11:50:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.456 11:50:07 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.456 11:50:07 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.456 11:50:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.456 ************************************ 00:03:31.456 START TEST allowed 00:03:31.456 ************************************ 00:03:31.456 11:50:07 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:31.456 11:50:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 
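TEST denied above drives 'setup output config' with PCI_BLOCKED=' 0000:5e:00.0' and greps for the 'Skipping denied controller' line; TEST allowed, starting here, inverts the check with PCI_ALLOWED. A minimal sketch of the per-BDF gate such a block/allow pair implies; the two environment variables come from the log, but the matching logic below is illustrative, not the verbatim setup.sh implementation:

    pci_can_use() {
        local bdf=$1
        # An explicit block list always wins.
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
        # An empty allow list means every remaining device is fair game.
        [[ -z "$PCI_ALLOWED" ]] && return 0
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]
    }

    # Example: with PCI_BLOCKED=' 0000:5e:00.0', pci_can_use 0000:5e:00.0
    # fails, matching the log's 'Skipping denied controller at 0000:5e:00.0'.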
00:03:31.456 11:50:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.456 11:50:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:31.456 11:50:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.456 11:50:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:36.734 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:af:00.0 0000:b0:00.0 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:af:00.0 ]] 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/driver 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:b0:00.0 ]] 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:b0:00.0/driver 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.734 11:50:13 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.932 00:03:40.932 real 0m9.664s 00:03:40.932 user 0m2.729s 00:03:40.932 sys 0m5.392s 00:03:40.932 11:50:17 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.932 11:50:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:40.932 ************************************ 00:03:40.932 END TEST allowed 00:03:40.932 ************************************ 00:03:40.932 00:03:40.932 real 0m27.135s 00:03:40.932 user 0m8.649s 00:03:40.932 sys 0m16.512s 00:03:40.932 11:50:17 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.932 11:50:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.932 ************************************ 00:03:40.932 END TEST acl 00:03:40.932 ************************************ 00:03:40.932 11:50:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.932 11:50:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.932 11:50:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.932 11:50:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.932 ************************************ 00:03:40.932 START TEST hugepages 00:03:40.932 ************************************ 00:03:40.932 11:50:17 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.932 * Looking for test storage... 
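Both subtests end with the verify step traced above (acl.sh@28-33): for each BDF it requires the sysfs node to exist, resolves the driver symlink, and compares its basename against nvme. Reconstructed from that xtrace as a sketch, not the verbatim acl.sh:

    verify() {
        local dev driver
        for dev in "$@"; do
            # The controller must be present on the PCI bus...
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            # ...and still bound to the kernel nvme driver.
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }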
00:03:40.932 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 38222412 kB' 'MemAvailable: 41737032 kB' 'Buffers: 2704 kB' 'Cached: 15048860 kB' 'SwapCached: 0 kB' 'Active: 12078088 kB' 'Inactive: 3444788 kB' 'Active(anon): 11611056 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474624 kB' 'Mapped: 153884 kB' 'Shmem: 11139744 kB' 'KReclaimable: 199896 kB' 'Slab: 601144 kB' 'SReclaimable: 199896 kB' 'SUnreclaim: 401248 kB' 'KernelStack: 16512 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439172 kB' 'Committed_AS: 12933172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203444 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.932 11:50:17 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.932 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 
11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.933 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.934 11:50:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.934 11:50:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:40.934 11:50:17 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.934 11:50:17 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.934 11:50:17 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.934 ************************************ 00:03:40.934 START TEST default_setup 00:03:40.934 ************************************ 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.934 11:50:18 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:45.132 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:03:45.132 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 
0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.132 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:45.132 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.132 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40452672 kB' 'MemAvailable: 43966904 kB' 'Buffers: 2704 kB' 'Cached: 15048952 kB' 'SwapCached: 0 kB' 'Active: 12097220 kB' 'Inactive: 3444788 kB' 'Active(anon): 11630188 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493460 kB' 'Mapped: 154428 kB' 'Shmem: 11139836 kB' 'KReclaimable: 199120 kB' 'Slab: 598440 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399320 kB' 'KernelStack: 16528 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12961292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203508 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' [xtrace condensed: setup/common.sh@31-@32 walks the captured /proc/meminfo output key by key, issuing continue for every field until the requested one matches] 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
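The three get_meminfo scans in this test all follow the pattern condensed above: setup/common.sh slurps the meminfo source (here /proc/meminfo; for per-node queries it reads /sys/devices/system/node/node<id>/meminfo and strips the "Node <id> " prefix), then splits each captured line on IFS=': ' and continues past every key until the requested one matches, echoing its value column. A minimal standalone sketch of that loop (hypothetical helper name, system-wide file only; not SPDK's actual implementation):

  # Print the value column of one /proc/meminfo key.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every key until the requested one matches, then print it.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  get_meminfo_sketch AnonHugePages   # prints 0 on this host, matching anon=0 above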
00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.133 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.134 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.134 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40453876 kB' 'MemAvailable: 43968108 kB' 'Buffers: 2704 kB' 'Cached: 15048956 kB' 'SwapCached: 0 kB' 'Active: 12096888 kB' 'Inactive: 3444788 kB' 'Active(anon): 11629856 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493440 kB' 'Mapped: 154352 kB' 'Shmem: 11139840 kB' 'KReclaimable: 199120 kB' 'Slab: 598432 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399312 kB' 'KernelStack: 16528 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12961308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' [xtrace condensed: the same per-key scan runs again, continuing until HugePages_Surp matches] 00:03:45.135 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.135 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.135 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.135 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
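With AnonHugePages and HugePages_Surp both reading 0, one scan remains for HugePages_Rsvd before verify_nr_hugepages runs its consistency checks (hugepages.sh@107 and @109, traced below). The bookkeeping reduces to simple arithmetic over the snapshot values; a worked sketch, assuming the size argument of get_test_nr_hugepages is in kB, which is consistent with Hugepagesize: 2048 kB yielding nr_hugepages=1024:

  # Sketch using the numbers reported in this run; not SPDK code.
  size_kb=2097152                                # requested test size (2 GiB)
  hugepagesize_kb=2048                           # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 2097152 / 2048 = 1024
  total=1024 surp=0 resv=0                       # HugePages_Total/_Surp/_Rsvd above
  (( total == nr_hugepages + surp + resv )) && echo consistent   # 1024 == 1024+0+0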
467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493448 kB' 'Mapped: 154352 kB' 'Shmem: 11139860 kB' 'KReclaimable: 199120 kB' 'Slab: 598432 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399312 kB' 'KernelStack: 16528 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12961332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.136 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.137 11:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:45.137 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; compare; continue -- repeated for each remaining key that is not HugePages_Rsvd: Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40455036 kB' 'MemAvailable: 43969268 kB' 'Buffers: 2704 kB' 'Cached: 15049016 kB' 'SwapCached: 0 kB' 'Active: 12096608 kB' 'Inactive: 3444788 kB' 'Active(anon): 11629576 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493068 kB' 'Mapped: 154352 kB' 'Shmem: 11139900 kB' 'KReclaimable: 199120 kB' 'Slab: 598432 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399312 kB' 'KernelStack: 16512 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12961352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
00:03:45.138 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # the same read/compare/continue scan skips every key that is not HugePages_Total: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
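
The two get_meminfo calls above follow one pattern from setup/common.sh: slurp a meminfo-style file, split each line on IFS=': ', and print the value of the first key that matches. A minimal standalone sketch of that pattern (it streams the file instead of using mapfile, and the default path is an assumption for illustration):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo scan traced above: print the value for one key.
    get_meminfo_sketch() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys skipped, as in the log
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }
    get_meminfo_sketch HugePages_Total   # prints 1024 on this box, per the dump above
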
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 22893484 kB' 'MemUsed: 9740484 kB' 'SwapCached: 0 kB' 'Active: 6034400 kB' 'Inactive: 191968 kB' 'Active(anon): 5754592 kB' 'Inactive(anon): 0 kB' 'Active(file): 279808 kB' 'Inactive(file): 191968 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5991612 kB' 'Mapped: 73604 kB' 'AnonPages: 238056 kB' 'Shmem: 5519836 kB' 'KernelStack: 8728 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90564 kB' 'Slab: 332776 kB' 'SReclaimable: 90564 kB' 'SUnreclaim: 242212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
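
The get_meminfo HugePages_Surp 0 call above takes the per-node branch: mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before the same key scan runs. A hedged standalone equivalent that uses plain word splitting instead of the prefix strip:

    # Sketch only: read one node-local counter; "_node _id" soak up the "Node 0" columns.
    node=0
    while read -r _node _id var val _; do
        [[ $var == HugePages_Surp: ]] && echo "node${node} HugePages_Surp=${val}"
    done < "/sys/devices/system/node/node${node}/meminfo"
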
00:03:45.140 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # the read/compare/continue scan skips every node0 key that is not HugePages_Surp: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:45.141 real    0m3.948s
00:03:45.141 user    0m1.523s
00:03:45.141 sys     0m2.496s
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:45.141 11:50:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:45.141 ************************************
00:03:45.141 END TEST default_setup
00:03:45.141 ************************************
00:03:45.141 11:50:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:45.141 11:50:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:45.141 11:50:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:45.141 11:50:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.141 ************************************
00:03:45.141 START TEST per_node_1G_alloc
00:03:45.141 ************************************
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:45.141 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
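
The get_test_nr_hugepages 1048576 0 1 prologue above turns a 1 GiB request (expressed in kB) into a page count; with the 2048 kB Hugepagesize reported in the dumps, the arithmetic behind the nr_hugepages=512 line is:

    size_kb=1048576                                  # requested size: 1 GiB in kB
    hugepagesz_kb=2048                               # Hugepagesize from the meminfo dumps
    nr_hugepages=$(( size_kb / hugepagesz_kb ))      # 1048576 / 2048 = 512
    echo "nr_hugepages=${nr_hugepages} on each of nodes 0 and 1"
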
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.142 11:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:48.435 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:48.435 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:48.435 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:48.435 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.435 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
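
The @146 lines show the knobs that drive scripts/setup.sh: NRHUGE pages on each NUMA node listed in HUGENODE. A hedged usage sketch (run from an SPDK checkout; root is needed for the sysfs writes):

    # Request 512 hugepages on each of nodes 0 and 1, then rerun device setup.
    NRHUGE=512 HUGENODE=0,1 sudo -E ./scripts/setup.sh

    # Roughly equivalent manual allocation for node 0, via the standard kernel interface:
    echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
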
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.435 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40492156 kB' 'MemAvailable: 44006388 kB' 'Buffers: 2704 kB' 'Cached: 15049096 kB' 'SwapCached: 0 kB' 'Active: 12092436 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625404 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488280 kB' 'Mapped: 153192 kB' 'Shmem: 11139980 kB' 'KReclaimable: 199120 kB' 'Slab: 598640 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399520 kB' 'KernelStack: 16336 kB' 'PageTables: 7488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203332 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
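
The @96 gate above tests the contents of /sys/kernel/mm/transparent_hugepage/enabled, which reads like "always [madvise] never" with the active mode in brackets; anonymous hugepages are only counted when THP is not set to never. A sketch of the same check:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so AnonHugePages in /proc/meminfo is worth sampling.
        grep AnonHugePages /proc/meminfo
    fi
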
00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # the read/compare/continue scan skips every key that is not AnonHugePages: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, ...
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.437 11:50:25 
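The trace above is setup/common.sh's get_meminfo helper doing a linear scan of /proc/meminfo: it reads one 'key: value' pair per iteration, continues past every key other than the requested one, and echoes the value on a match -- here AnonHugePages reads 0 kB, so hugepages.sh records anon=0. A minimal sketch of that parsing loop in plain bash (the function name get_meminfo_sketch and the not-found return path are illustrative assumptions, not SPDK's verbatim source):

    #!/usr/bin/env bash
    # Linear /proc/meminfo lookup in the style of the traced helper.
    get_meminfo_sketch() {
        local get=$1 var val _
        local mem_f=/proc/meminfo              # a per-node query would read /sys/devices/system/node/node<N>/meminfo instead
        while IFS=': ' read -r var val _; do   # splits "AnonHugePages:   0 kB" into var=AnonHugePages, val=0, _=kB
            [[ $var == "$get" ]] || continue   # the long runs of "continue" entries in the trace
            echo "$val"                        # matched: print the numeric value
            return 0
        done < "$mem_f"
        return 1                               # assumed fallback: requested key not present
    }
    anon=$(get_meminfo_sketch AnonHugePages)   # mirrors setup/hugepages.sh@97: anon=0 on this node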
00:03:48.436 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.437 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40492328 kB' 'MemAvailable: 44006560 kB' 'Buffers: 2704 kB' 'Cached: 15049100 kB' 'SwapCached: 0 kB' 'Active: 12092044 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625012 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488304 kB' 'Mapped: 153060 kB' 'Shmem: 11139984 kB' 'KReclaimable: 199120 kB' 'Slab: 598560 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399440 kB' 'KernelStack: 16336 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203300 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
[trace condensed: setup/common.sh@32 walks the snapshot key by key (MemTotal, MemFree, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd), logging continue for each, until the requested key matches]
00:03:48.438 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.438 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.438 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:48.438 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
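Each get_meminfo pass re-reads an essentially identical snapshot, and its hugepage fields are internally consistent: HugePages_Total (1024) multiplied by Hugepagesize (2048 kB) is exactly the Hugetlb figure. A one-line plain-bash check, with both inputs copied from the printf entries above:

    pages=1024                    # HugePages_Total from the snapshot
    page_kb=2048                  # Hugepagesize, in kB
    echo $(( pages * page_kb ))   # prints 2097152, matching 'Hugetlb: 2097152 kB'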
local var val 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40492656 kB' 'MemAvailable: 44006888 kB' 'Buffers: 2704 kB' 'Cached: 15049120 kB' 'SwapCached: 0 kB' 'Active: 12091764 kB' 'Inactive: 3444788 kB' 'Active(anon): 11624732 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487992 kB' 'Mapped: 153060 kB' 'Shmem: 11140004 kB' 'KReclaimable: 199120 kB' 'Slab: 598560 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399440 kB' 'KernelStack: 16336 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203300 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.439 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.703 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.704 nr_hugepages=1024 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.704 resv_hugepages=0 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.704 surplus_hugepages=0 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.704 anon_hugepages=0 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 
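
The block above closes the HugePages_Rsvd lookup: get_meminfo echoes 0, hugepages.sh stores it as resv=0, prints the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary, and then checks at @107 that the configured page count matches nr_hugepages + surp + resv. A minimal sketch of that accounting step, using a hypothetical stand-in helper named get_meminfo_field rather than the real parser in setup/common.sh:

#!/usr/bin/env bash
# Sketch of the accounting check at hugepages.sh@100-@107.
# get_meminfo_field is a hypothetical stand-in for setup/common.sh's get_meminfo.
get_meminfo_field() {
    local key=$1 var val _
    # IFS=': ' splits "HugePages_Total:    1024" into var=HugePages_Total val=1024,
    # the same split the trace shows at common.sh@31.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

nr_hugepages=$(get_meminfo_field HugePages_Total)
surp=$(get_meminfo_field HugePages_Surp)
resv=$(get_meminfo_field HugePages_Rsvd)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# The test configured 1024 pages, so the kernel's totals must add back up to 1024.
(( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
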
00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.704 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40492524 kB' 'MemAvailable: 44006756 kB' 'Buffers: 2704 kB' 'Cached: 15049160 kB' 'SwapCached: 0 kB' 'Active: 12091460 kB' 'Inactive: 3444788 kB' 'Active(anon): 11624428 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487608 kB' 'Mapped: 153060 kB' 'Shmem: 11140044 kB' 'KReclaimable: 199120 kB' 'Slab: 598560 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399440 kB' 'KernelStack: 16320 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203300 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.705 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.706 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
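
With the global total confirmed at @110 (1024 pages), get_nodes at hugepages.sh@27-@33 enumerates /sys/devices/system/node/node<N> and records the expected per-node allocation, 512 pages on each of the two nodes here. A short sketch of that enumeration, using the same extglob pattern the trace shows (not the verbatim script):

#!/usr/bin/env bash
# Sketch of get_nodes: expect 512 hugepages on every NUMA node found in sysfs.
shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512   # ".../node1" -> array index 1
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "no_nodes=$no_nodes nodes=${!nodes_sys[*]}"
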
00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.707 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23945364 kB' 'MemUsed: 8688604 kB' 'SwapCached: 0 kB' 'Active: 6032800 kB' 'Inactive: 191968 kB' 'Active(anon): 5752992 kB' 'Inactive(anon): 0 kB' 'Active(file): 279808 kB' 'Inactive(file): 191968 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5991716 kB' 'Mapped: 72620 kB' 'AnonPages: 236320 kB' 'Shmem: 5519940 kB' 'KernelStack: 8776 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90564 kB' 'Slab: 332836 kB' 'SReclaimable: 90564 kB' 'SUnreclaim: 242272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 
11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.708 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661476 kB' 'MemFree: 16547160 kB' 'MemUsed: 11114316 kB' 'SwapCached: 0 kB' 'Active: 6058684 kB' 'Inactive: 3252820 kB' 'Active(anon): 5871460 kB' 'Inactive(anon): 0 kB' 'Active(file): 187224 kB' 'Inactive(file): 3252820 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9060152 kB' 'Mapped: 80440 kB' 'AnonPages: 251360 kB' 'Shmem: 5620108 kB' 'KernelStack: 7560 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108556 kB' 'Slab: 265724 kB' 'SReclaimable: 108556 kB' 'SUnreclaim: 157168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 
11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.711 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.711 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.711 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.711 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.711 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:48.711 [... setup/common.sh@31/@32 IFS=': ' read/compare loop repeats for Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free; none match HugePages_Surp, each iteration falls through to continue ...]
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:48.712 node0=512 expecting 512
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:48.712 node1=512 expecting 512
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:48.712 
00:03:48.712 real 0m3.805s
00:03:48.712 user 0m1.408s
00:03:48.712 sys 0m2.465s
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:48.712 11:50:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:48.712 ************************************
00:03:48.712 END TEST per_node_1G_alloc
00:03:48.712 ************************************
00:03:48.712 11:50:25 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:48.712 11:50:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:48.712 11:50:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:48.712 11:50:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:48.712 ************************************
00:03:48.712 START TEST even_2G_alloc
00:03:48.712 ************************************
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
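The repeated @31/@32 read-and-compare entries above, which recur in every verify step below, are one idiom: get_meminfo reads a meminfo file, strips any "Node <n>" prefix, then walks it key by key until the requested field matches, echoing its value. A minimal sketch of that pattern, reconstructed from the xtrace (the prefix strip is simplified to sed here; the real setup/common.sh helper may differ in details):

    # Sketch of the scan pattern visible in the trace; not the verbatim helper.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument, prefer the node-local meminfo (assumption:
        # mirrors the [[ -e /sys/devices/system/node/node$node/meminfo ]] test).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the continue runs in the log
            echo "$val"                        # e.g. the "echo 0" entries above
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Callers capture the echoed value by command substitution, e.g. surp=$(get_meminfo HugePages_Surp), which is why the trace shows assignments like anon=0 and surp=0 immediately after each return 0.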
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
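In arithmetic terms, the trace above computes 2097152 kB / 2048 kB = 1024 default-size pages and assigns 512 to each of the two NUMA nodes. A condensed sketch of that bookkeeping (an assumed simplification of setup/hugepages.sh; only the values are taken from the trace):

    # Assumed simplification of get_test_nr_hugepages(_per_node); trace values.
    size_kb=2097152                  # requested allocation in kB (2 GB total)
    default_hugepages_kb=2048        # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # = 1024
    _no_nodes=2
    declare -a nodes_test
    per_node=$(( nr_hugepages / _no_nodes ))             # = 512
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node              # nodes_test=(512 512)
        (( _no_nodes-- ))
    done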
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.712 11:50:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:52.942 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:52.943 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:52.943 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:52.943 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.943 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
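The @96 test above gates the anonymous-hugepage check on transparent hugepages: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled is the active THP mode ("always [madvise] never" on this host), and AnonHugePages is only queried when that mode is not "never". A reconstruction of that gate, assuming the get_meminfo sketch from earlier (not the verbatim hugepages.sh code):

    # Sketch of the THP gate at hugepages.sh@96; reconstruction, not verbatim.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP enabled in some form, so anonymous huge pages may exist
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi

The escaped pattern in the trace, *\[\n\e\v\e\r\]*, is just the quoted glob *"[never]"* with every character escaped by xtrace.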
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40501684 kB' 'MemAvailable: 44015916 kB' 'Buffers: 2704 kB' 'Cached: 15049252 kB' 'SwapCached: 0 kB' 'Active: 12092712 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625680 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488220 kB' 'Mapped: 153120 kB' 'Shmem: 11140136 kB' 'KReclaimable: 199120 kB' 'Slab: 598524 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399404 kB' 'KernelStack: 16384 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
00:03:52.943 [... setup/common.sh@31/@32 read/compare loop skips MemTotal through HardwareCorrupted in /proc/meminfo order; none match AnonHugePages ...]
00:03:52.943 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
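The @28/@29 pair in the preamble above is worth unpacking: per-node meminfo files prefix every row with "Node <n>", so the helper mapfiles the whole file and strips that prefix with an extglob expansion before the read loop parses it. A standalone sketch of just that step (node0 is used as an arbitrary example):

    # Standalone sketch of the mapfile + prefix-strip at setup/common.sh@28-29.
    shopt -s extglob                 # needed for the +([0-9]) pattern below
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 MemTotal: 123 kB" -> "MemTotal: 123 kB" on every array element
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"    # show the first few cleaned rows

For the system-wide /proc/meminfo queries in this trace the strip is a no-op, which is why the array expansion appears unchanged in the log.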
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.944 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40503984 kB' 'MemAvailable: 44018216 kB' 'Buffers: 2704 kB' 'Cached: 15049256 kB' 'SwapCached: 0 kB' 'Active: 12092356 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625324 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488384 kB' 'Mapped: 153068 kB' 'Shmem: 11140140 kB' 'KReclaimable: 199120 kB' 'Slab: 598516 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399396 kB' 'KernelStack: 16368 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203444 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
00:03:52.944 [... setup/common.sh@31/@32 read/compare loop skips MemTotal through HugePages_Rsvd in /proc/meminfo order; none match HugePages_Surp ...]
00:03:52.946 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.946 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.946 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.946 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
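With anon and surp collected (and resv queried next), verify_nr_hugepages repeats the same per-node query and expects the even split computed earlier, which is where the "node0=512 expecting 512" lines in this log come from. A hypothetical condensation of that check; get_meminfo, nodes_test and the 512 expectation come from the trace, while the exact assertion is an assumption:

    # Hypothetical condensation of the per-node verification; not verbatim.
    nodes_test=(512 512)                              # expected split from above
    for node in "${!nodes_test[@]}"; do
        got=$(get_meminfo HugePages_Total "$node")    # reads node$node/meminfo
        echo "node$node=$got expecting ${nodes_test[node]}"
        (( got == nodes_test[node] )) || { echo "hugepage split mismatch"; exit 1; }
    done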
mem=("${mem[@]#Node +([0-9]) }") 00:03:52.946 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40503984 kB' 'MemAvailable: 44018216 kB' 'Buffers: 2704 kB' 'Cached: 15049256 kB' 'SwapCached: 0 kB' 'Active: 12092392 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625360 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488420 kB' 'Mapped: 153068 kB' 'Shmem: 11140140 kB' 'KReclaimable: 199120 kB' 'Slab: 598516 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399396 kB' 'KernelStack: 16384 kB' 'PageTables: 7464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 
11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.947 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.948 nr_hugepages=1024 00:03:52.948 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.948 resv_hugepages=0 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.949 surplus_hugepages=0 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.949 anon_hugepages=0 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40504264 kB' 'MemAvailable: 44018496 kB' 
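The lookups traced above all follow one pattern: get_meminfo sets IFS to ': ', reads the meminfo lines one at a time into key and value fields, and skips every key until the requested one matches (the escaped \H\u\g\e... form is just how xtrace renders the literal pattern), then echoes its value. A minimal standalone sketch of that pattern; the helper name and argument handling here are illustrative, not the verbatim setup/common.sh source:

    # Hypothetical re-creation of the scan traced above: look one key up
    # in a meminfo-style file and print its value, or fail if absent.
    get_meminfo_value() {
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # IFS=': ' splits "HugePages_Surp:   0" into var=HugePages_Surp,
            # val=0; for "MemTotal: 60295444 kB" the "kB" unit lands in $_.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

    # surp=$(get_meminfo_value HugePages_Surp)   # -> 0 on this host
    # resv=$(get_meminfo_value HugePages_Rsvd)   # -> 0 on this host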
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.949 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40504264 kB' 'MemAvailable: 44018496 kB' 'Buffers: 2704 kB' 'Cached: 15049312 kB' 'SwapCached: 0 kB' 'Active: 12092044 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625012 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487964 kB' 'Mapped: 153068 kB' 'Shmem: 11140196 kB' 'KReclaimable: 199120 kB' 'Slab: 598516 kB' 'SReclaimable: 199120 kB' 'SUnreclaim: 399396 kB' 'KernelStack: 16352 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12937628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
[trace condensed: the @31/@32 loop reads and skips every key from MemTotal through Unaccepted before HugePages_Total matches]
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
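get_nodes, just traced, derives the NUMA layout purely from sysfs globbing: with extglob enabled, node+([0-9]) expands to node0, node1, ..., and the numeric suffix indexes a per-node target of 512 pages (512 x 2048 kB = 1 GiB per node, the even 2G split this test verifies). A sketch under those assumptions; variable names mirror the trace and the fallback message is illustrative:

    # Sketch of the traced node enumeration (illustrative, not verbatim).
    shopt -s extglob                       # enables the +([0-9]) glob below
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node" -> index
        nodes_sys[${node##*node}]=512      # 512 x 2048 kB pages = 1 GiB/node
    done
    no_nodes=${#nodes_sys[@]}              # 2 on this dual-socket machine
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2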
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.951 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23942764 kB' 'MemUsed: 8691204 kB' 'SwapCached: 0 kB' 'Active: 6032056 kB' 'Inactive: 191968 kB' 'Active(anon): 5752248 kB' 'Inactive(anon): 0 kB' 'Active(file): 279808 kB' 'Inactive(file): 191968 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5991856 kB' 'Mapped: 72620 kB' 'AnonPages: 235316 kB' 'Shmem: 5520080 kB' 'KernelStack: 8744 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90564 kB' 'Slab: 332908 kB' 'SReclaimable: 90564 kB' 'SUnreclaim: 242344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the @31/@32 loop reads and skips every node0 key from MemTotal through HugePages_Free before HugePages_Surp matches]
00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
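The per-node lookups differ from the global ones only in their input file: given a node index, get_meminfo swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before the same key scan runs. A condensed sketch of that source selection; the wrapper name is hypothetical and the usage comment illustrative:

    # Sketch: pick the meminfo source for an optional node index, strip the
    # "Node N " prefix the per-node files prepend, then reuse the scanner.
    read_meminfo_lines() {
        local node=$1 mem_f=/proc/meminfo mem
        shopt -s extglob
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }

    # read_meminfo_lines 0   # node0 view; with no argument the node path does
    #                        # not exist, so the global /proc/meminfo is used.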
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.952 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661476 kB' 'MemFree: 16562008 kB' 'MemUsed: 11099468 kB' 'SwapCached: 0 kB' 'Active: 6061048 kB' 'Inactive: 3252820 kB' 'Active(anon): 5873824 kB' 'Inactive(anon): 0 kB' 'Active(file): 187224 kB' 'Inactive(file): 3252820 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9060180 kB' 'Mapped: 80448 kB' 'AnonPages: 253768 kB' 'Shmem: 5620136 kB' 'KernelStack: 7624 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108556 kB' 'Slab: 265608 kB' 'SReclaimable: 108556 kB' 'SUnreclaim: 157052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
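The scan elided above is setup/common.sh's get_meminfo: it reads the (optionally per-node) meminfo file, strips the 'Node N ' prefix, and walks the fields with IFS=': ' until the requested key matches. A minimal runnable sketch of that pattern, simplified from the trace rather than copied verbatim from setup/common.sh:

    #!/usr/bin/env bash
    # Sketch of the meminfo scan traced above: pick one field out of
    # /proc/meminfo or /sys/devices/system/node/node<N>/meminfo.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # NUMA systems expose per-node counters; prefer them when a node is given.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node 1 " prefix; strip it (needs extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 against the node1 dump above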
[... repeated xtrace cycles (continue / IFS=': ' / read -r var val _) while get_meminfo skips the node1 fields, MemTotal through HugePages_Free; elided ...]
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:52.954 node0=512 expecting 512
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:52.954 node1=512 expecting 512
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:52.954 
00:03:52.954 real	0m3.804s
00:03:52.954 user	0m1.435s
00:03:52.954 sys	0m2.438s
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:52.954 11:50:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:52.954 ************************************
00:03:52.954 END TEST even_2G_alloc
00:03:52.954 ************************************
00:03:52.954 11:50:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:52.954 11:50:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:52.954 11:50:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:52.954 11:50:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:52.954 ************************************
00:03:52.954 START TEST odd_alloc
00:03:52.954 ************************************
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
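get_test_nr_hugepages converts the requested size, 2098176 kB (HUGEMEM=2049 MB), into 2048 kB pages, giving the nr_hugepages=1025 seen in the lines that follow, and get_test_nr_hugepages_per_node then spreads the odd count across both NUMA nodes as 513 and 512. A sketch of the arithmetic implied by the trace; the rounding and variable handling are my reading of it, not the verbatim setup/hugepages.sh:

    #!/usr/bin/env bash
    # Page count and per-node split as implied by the trace. Rounding up
    # reproduces the traced nr_hugepages=1025; the exact expression in
    # setup/hugepages.sh may differ.
    size_kb=2098176 default_hugepages=2048
    nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))  # 1025

    _nr_hugepages=$nr_hugepages
    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        # The highest-numbered node takes the floor of an even share, so the
        # remainder lands on node 0: 512 on node1, then 513 on node0.
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        : $(( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512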
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:52.954 11:50:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:56.265 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:56.265 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:56.265 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:56.265 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:56.265 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.265 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40491108 kB' 'MemAvailable: 44005328 kB' 'Buffers: 2704 kB' 'Cached: 15049408 kB' 'SwapCached: 0 kB' 'Active: 12093040 kB' 'Inactive: 3444788 kB' 'Active(anon): 11626008 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489004 kB' 'Mapped: 153172 kB' 'Shmem: 11140292 kB' 'KReclaimable: 199096 kB' 'Slab: 598468 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399372 kB' 'KernelStack: 16496 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486724 kB' 'Committed_AS: 12940792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203540 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
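The hugepages.sh@96 test a few lines up compares /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never' on this host) against a glob; the backslashes in the xtrace are just bash escaping each character of the pattern [never]. Sampling AnonHugePages only makes sense when THP is not globally disabled. A hedged sketch of that gate, using awk for brevity in place of the traced read loop:

    #!/usr/bin/env bash
    # The escaped pattern *\[\n\e\v\e\r\]* in the xtrace is the glob *"[never]"*.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP enabled in some mode: sample anonymous hugepage usage.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "AnonHugePages: ${anon} kB"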
[... repeated xtrace cycles (continue / IFS=': ' / read -r var val _) while get_meminfo skips the fields ahead of AnonHugePages, MemTotal through HardwareCorrupted; elided ...]
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.266 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40490492 kB' 'MemAvailable: 44004712 kB' 'Buffers: 2704 kB' 'Cached: 15049408 kB' 'SwapCached: 0 kB' 'Active: 12092672 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625640 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488604 kB' 'Mapped: 153092 kB' 'Shmem: 11140292 kB' 'KReclaimable: 199096 kB' 'Slab: 598476 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399380 kB' 'KernelStack: 16400 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486724 kB' 'Committed_AS: 12940808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203508 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
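The lookup that follows targets HugePages_Surp, the kernel's count of surplus pages allocated beyond the static pool via vm.nr_overcommit_hugepages; after a static reservation like this test's, the verifier expects it to be 0. A minimal standalone check in the same spirit:

    #!/usr/bin/env bash
    # Surplus hugepages come from overcommit (vm.nr_overcommit_hugepages),
    # not the static pool, so a clean static allocation should report 0.
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    if (( surp != 0 )); then
        echo "unexpected surplus hugepages: $surp" >&2
        exit 1
    fi
    echo "HugePages_Surp: 0, as expected"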
[... repeated xtrace cycles (continue / IFS=': ' / read -r var val _) while get_meminfo scans toward HugePages_Surp, MemTotal through HugePages_Rsvd; the captured excerpt ends mid-scan here ...]
-- # IFS=': ' 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40490756 kB' 'MemAvailable: 44004976 kB' 'Buffers: 2704 kB' 'Cached: 15049428 kB' 'SwapCached: 0 kB' 'Active: 12092692 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625660 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488588 kB' 'Mapped: 153092 kB' 'Shmem: 11140312 kB' 'KReclaimable: 199096 kB' 'Slab: 598476 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399380 kB' 'KernelStack: 16448 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486724 kB' 'Committed_AS: 12940828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203572 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
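What this trace is doing: get_meminfo in setup/common.sh walks the meminfo source one "Field: value" line at a time, splitting on IFS=': ' with read -r var val _, and continues past every field until the requested name matches, then echoes the value (0 for HugePages_Surp above). The backslash-riddled right-hand sides like \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how bash xtrace prints a quoted, literal comparison word. A minimal sketch of the same scan, reading /proc/meminfo directly instead of through the script's mapfile indirection (the function name my_get_meminfo is ours, not SPDK's):

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo scan seen above: split each
    # "Field: value [unit]" line on ': ', skip until the requested
    # field matches, then print its value.
    my_get_meminfo() {    # hypothetical name, not part of SPDK
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the check/continue loop in the log
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # field not present
    }

    my_get_meminfo HugePages_Surp   # prints 0 here, matching surp=0 above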
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.273 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40490756 kB' 'MemAvailable: 44004976 kB' 'Buffers: 2704 kB' 'Cached: 15049428 kB' 'SwapCached: 0 kB' 'Active: 12092692 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625660 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488588 kB' 'Mapped: 153092 kB' 'Shmem: 11140312 kB' 'KReclaimable: 199096 kB' 'Slab: 598476 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399380 kB' 'KernelStack: 16448 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486724 kB' 'Committed_AS: 12940828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203572 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.274 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same check/continue trace repeats for MemFree through HugePages_Free, none of which matches HugePages_Rsvd ...]
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:56.280 nr_hugepages=1025
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:56.280 resv_hugepages=0
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:56.280 surplus_hugepages=0
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:56.280 anon_hugepages=0
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
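The two arithmetic guards at hugepages.sh@107 and @109 are the point of the whole scrape: the kernel's HugePages_Total must equal the requested (deliberately odd) nr_hugepages plus any surplus and reserved pages. With the values just read, that is 1025 == 1025 + 0 + 0, so the odd allocation was satisfied exactly. The same check in isolation, using the numbers from this run:

    # The accounting identity behind hugepages.sh@107, with the values
    # the scan just produced on this machine:
    nr_hugepages=1025   # requested odd page count
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1025          # HugePages_Total
    # 1025 == 1025 + 0 + 0; a mismatch would mean the kernel used surplus
    # pages or still holds reservations against the pool.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"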
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40490424 kB' 'MemAvailable: 44004644 kB' 'Buffers: 2704 kB' 'Cached: 15049428 kB' 'SwapCached: 0 kB' 'Active: 12092680 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625648 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488576 kB' 'Mapped: 153092 kB' 'Shmem: 11140312 kB' 'KReclaimable: 199096 kB' 'Slab: 598476 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399380 kB' 'KernelStack: 16496 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486724 kB' 'Committed_AS: 12940852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203572 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.280 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same check/continue trace repeats for MemFree through Unaccepted, none of which matches HugePages_Total ...]
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
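The snapshot values are internally consistent, which is worth checking when reading these dumps: 1025 pages at a Hugepagesize of 2048 kB account for 1025 * 2048 = 2099200 kB, exactly the Hugetlb figure printed in both snapshots above. As a quick check:

    # Cross-check Hugetlb against page count x page size from the snapshot:
    pages=1025      # HugePages_Total
    page_kb=2048    # Hugepagesize in kB
    echo $(( pages * page_kb ))   # 2099200, matching 'Hugetlb: 2099200 kB'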
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23925936 kB' 'MemUsed: 8708032 kB' 'SwapCached: 0 kB' 'Active: 6031252 kB' 'Inactive: 191968 kB' 'Active(anon): 5751444 kB' 'Inactive(anon): 0 kB' 'Active(file): 279808 kB' 'Inactive(file): 191968 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5991920 kB' 'Mapped: 72628 kB' 'AnonPages: 234540 kB' 'Shmem: 5520144 kB' 'KernelStack: 8712 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90556 kB' 'Slab: 332728 kB' 'SReclaimable: 90556 kB' 'SUnreclaim: 242172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
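From here the test goes per-node: get_nodes globs /sys/devices/system/node/node+([0-9]) (an extglob pattern) and records a per-node target, 512 pages on node0 and 513 on node1, which is how the odd total of 1025 splits across the two sockets. get_meminfo is then re-run with node=0, so common.sh@23-24 switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh@29 strips before the usual field scan. A self-contained sketch of that per-node read (node_meminfo is our name, not SPDK's):

    #!/usr/bin/env bash
    # Per-node variant of the scan: node meminfo lines look like
    # "Node 0 HugePages_Total: 512", so drop the "Node <n> " prefix
    # before splitting on ': ' as before.
    shopt -s extglob    # needed for the +([0-9]) patterns below

    node_meminfo() {    # hypothetical helper, not part of SPDK
        local node=$1 get=$2 line var val _
        local f=/sys/devices/system/node/node$node/meminfo
        [[ -e $f ]] || f=/proc/meminfo   # fall back like common.sh@22-24 does
        while read -r line; do
            line=${line#Node +([0-9]) }  # strip the per-node prefix, as @29 does
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$f"
        return 1
    }

    # Same node enumeration as get_nodes, extracting the node id the same
    # way ("${node##*node}"); per the snapshots and targets above this
    # should report 512 hugepages on node0 and 513 on node1.
    for n in /sys/devices/system/node/node+([0-9]); do
        echo "node${n##*node}: $(node_meminfo "${n##*node}" HugePages_Total) hugepages"
    done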
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same check/continue trace repeats for the node0 fields MemFree through Shmem, none of which matches HugePages_Surp ...]
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661476 kB' 'MemFree: 16564548 kB' 'MemUsed: 11096928 kB' 'SwapCached: 0 kB' 'Active: 6060808 kB' 'Inactive: 3252820 kB' 'Active(anon): 5873584 kB' 'Inactive(anon): 0 kB' 'Active(file): 187224 kB' 'Inactive(file): 3252820 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9060216 kB' 'Mapped: 80464 kB' 'AnonPages: 253460 kB' 'Shmem: 5620172 kB' 'KernelStack: 7560 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108540 kB' 'Slab: 265748 kB' 'SReclaimable: 108540 kB' 'SUnreclaim: 157208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.548 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.549 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.549 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.549 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
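Both @16 dumps above come from the same lookup pattern that the trace then replays field by field: pick /proc/meminfo or the per-node sysfs copy, strip the "Node N " prefix, and read key/value pairs until the requested field matches. A minimal standalone sketch, assuming that file layout (get_meminfo_sketch is an illustrative name, not the actual setup/common.sh helper):

    shopt -s extglob    # for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line mem_f=/proc/meminfo
        local -a mem
        # Per-node stats live in sysfs; every line there carries a "Node N " prefix.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # The @32 tests in the trace are this comparison, once per field.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # Usage matching the trace: get_meminfo_sketch HugePages_Surp 0   -> prints 0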
00:03:56.549 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 [condensed xtrace: the same per-field walk over the node1 dump, MemTotal through HugePages_Free, each HugePages_Surp test failing with continue]
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:56.550 node0=512 expecting 513
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:56.550 node1=513 expecting 512
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
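The @126/@127 loops above fill sorted_t and sorted_s by using each per-node count as an array index; bash expands ${!arr[*]} in ascending index order, so the @130 comparison checks the two multisets regardless of which node ended up with which count. A condensed sketch of the same trick, with the literal counts taken from the echo lines above:

    # Index-by-value trick: keys of an indexed bash array list in ascending order.
    nodes_test=(512 513)   # pages each node actually has (node0, node1)
    nodes_sys=(513 512)    # pages requested per node; order differs on purpose
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    # Both sides expand to "512 513", mirroring the @130 [[ 512 513 == 512 513 ]]
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'hugepage spread matches'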
00:03:56.550
00:03:56.550 real 0m3.778s
00:03:56.550 user 0m1.390s
00:03:56.550 sys 0m2.456s
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:56.550 11:50:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:56.550 ************************************
00:03:56.550 END TEST odd_alloc
00:03:56.550 ************************************
00:03:56.550 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:56.550 11:50:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:56.550 11:50:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:56.550 11:50:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:56.550 ************************************
00:03:56.550 START TEST custom_alloc
00:03:56.550 ************************************
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
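The @49 to @57 step above turned size=1048576 into nr_hugepages=512; the numbers are consistent with dividing the requested kB size by the 2048 kB default hugepage size reported in the dumps (Hugepagesize: 2048 kB). A sketch of that arithmetic, assuming plain integer division:

    default_hugepages=2048                      # kB, from the Hugepagesize line in the dumps
    echo $(( 1048576 / default_hugepages ))     # 512,  matches nr_hugepages=512 above
    echo $(( 2097152 / default_hugepages ))     # 1024, matches nr_hugepages=1024 below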
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
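A plausible reading of the @81-@84 loop above, which spreads _nr_hugepages evenly across the nodes from the highest index down; the ': 256'/': 1' and ': 0'/': 0' entries are consistent with the two no-op arithmetic steps sketched here. Illustrative only, not the verbatim setup/hugepages.sh source:

    _nr_hugepages=512 _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 256, then 256
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # traces as ': 256', then ': 0'
        : $(( --_no_nodes ))                                         # traces as ': 1', then ': 0'
    done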
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:56.550 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:56.551 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:56.551 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:56.551 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:56.551 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:56.551 11:50:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:56.551 11:50:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.551 11:50:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:59.842 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:59.842 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:59.842 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:59.842 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.842 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:00.105 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:00.105 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
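The @181-@187 lines above assemble the HUGENODE override that scripts/setup.sh consumes: with IFS set to ',' at @167, flattening the array with [*] joins the per-node entries with commas, yielding exactly the string in the trace. A minimal reproduction:

    IFS=,                         # as in the @167 'local IFS=,'
    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "HUGENODE=${HUGENODE[*]}"   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024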
00:04:00.105 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:00.105 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.105 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 39446104 kB' 'MemAvailable: 42960324 kB' 'Buffers: 2704 kB' 'Cached: 15049560 kB' 'SwapCached: 0 kB' 'Active: 12093880 kB' 'Inactive: 3444788 kB' 'Active(anon): 11626848 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489144 kB' 'Mapped: 153176 kB' 'Shmem: 11140444 kB' 'KReclaimable: 199096 kB' 'Slab: 598356 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399260 kB' 'KernelStack: 16432 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963460 kB' 'Committed_AS: 12938656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203492 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
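The @96 test above is a transparent-hugepage gate: the kernel reports the THP mode with brackets around the active setting (here 'always [madvise] never'), and the anon baseline is only sampled when the mode is not [never]. A sketch of the equivalent check, using the standard THP sysfs path; the awk lookup stands in for the get_meminfo call:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages, so record AnonHugePages as a baseline
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi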
00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.106 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- 
00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:00.107 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # (function entry: local get=HugePages_Surp; local node=; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]] is false, so the global file is used; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _)
00:04:00.108 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 39447424 kB' 'MemAvailable: 42961644 kB' 'Buffers: 2704 kB' 'Cached: 15049560 kB' 'SwapCached: 0 kB' 'Active: 12092844 kB' 'Inactive: 3444788 kB' 'Active(anon): 11625812 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488568 kB' 'Mapped: 153092 kB' 'Shmem: 11140444 kB' 'KReclaimable: 199096 kB' 'Slab: 598308 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399212 kB' 'KernelStack: 16416 kB' 'PageTables: 7408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963460 kB' 'Committed_AS: 12938672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
00:04:00.108-00:04:00.109 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (per-key scan: MemTotal through HugePages_Rsvd each fail the match against HugePages_Surp and hit 'continue')
00:04:00.109 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.109 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.109 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.109 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:00.109 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.109 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # (function entry as above, with get=HugePages_Rsvd)
00:04:00.110 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 39446656 kB' 'MemAvailable: 42960876 kB' 'Buffers: 2704 kB' 'Cached: 15049580 kB' 'SwapCached: 0 kB' 'Active: 12094392 kB' 'Inactive: 3444788 kB' 'Active(anon): 11627360 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490132 kB' 'Mapped: 153596 kB' 'Shmem: 11140464 kB' 'KReclaimable: 199096 kB' 'Slab: 598308 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399212 kB' 'KernelStack: 16368 kB' 'PageTables: 7268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963460 kB' 'Committed_AS: 12941704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203444 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
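
Each call above also runs mem=("${mem[@]#Node +([0-9]) }") right after mapfile. That expansion drops the 'Node N ' prefix carried by per-node meminfo files under /sys/devices/system/node/, so the same read loop parses both the global and the per-node layout. A short demonstration of that expansion with made-up sample values; note that +([0-9]) is an extglob pattern and needs extglob enabled:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below is an extglob pattern

    # Per-node meminfo lines carry a 'Node N ' prefix; /proc/meminfo lines
    # do not. Sample values are illustrative only.
    mem=(
        'Node 0 HugePages_Total: 768'
        'Node 0 HugePages_Free: 768'
        'MemFree: 39446656 kB'
    )

    # The expansion seen at common.sh@29: strip a leading 'Node <digits> '
    # if present; un-prefixed lines pass through unchanged.
    mem=("${mem[@]#Node +([0-9]) }")

    printf '%s\n' "${mem[@]}"
    # -> HugePages_Total: 768
    # -> HugePages_Free: 768
    # -> MemFree: 39446656 kB
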
00:04:00.110-00:04:00.111 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (per-key scan: MemTotal through HugePages_Free each fail the match against HugePages_Rsvd and hit 'continue')
00:04:00.111 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.111 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.111 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.111 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:00.111 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:00.112 nr_hugepages=1536
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:00.112 resv_hugepages=0
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:00.112 surplus_hugepages=0
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:00.112 anon_hugepages=0
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
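
Those two arithmetic tests are the point of this step: the 1536 pages the test configured must all be visible as HugePages_Total, with nothing hiding in surplus or reserved accounting. A standalone sketch of the same check follows, reusing the IFS=': ' scan the trace itself performs; the helper name meminfo_val and the hard-coded 1536 mirror this run only and are illustrative:

    #!/usr/bin/env bash
    # Sketch of the hugepage accounting check the trace performs.
    meminfo_val() {                        # same IFS=': ' scan as in the trace
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$1" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    expected=1536                          # the allocation this job requested
    nr=$(meminfo_val HugePages_Total)
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)

    # Every configured page must be accounted for: here 1536 == 1536 + 0 + 0,
    # and none are surplus or reserved, so both checks pass silently.
    (( expected == nr + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( expected == nr )) || echo 'unexpected surplus/reserved pages' >&2
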
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # (function entry as above, with get=HugePages_Total)
00:04:00.112 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 39442652 kB' 'MemAvailable: 42956872 kB' 'Buffers: 2704 kB' 'Cached: 15049600 kB' 'SwapCached: 0 kB' 'Active: 12099336 kB' 'Inactive: 3444788 kB' 'Active(anon): 11632304 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495076 kB' 'Mapped: 153596 kB' 'Shmem: 11140484 kB' 'KReclaimable: 199096 kB' 'Slab: 598308 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 399212 kB' 'KernelStack: 16368 kB' 'PageTables: 7268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963460 kB' 'Committed_AS: 12945936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203448 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
00:04:00.112-00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (per-key scan in progress: MemTotal through ShmemPmdMapped have failed the match against HugePages_Total so far, each hitting 'continue')
read -r var val _ 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.375 11:50:37 
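The long run of "continue" lines above is setup/common.sh's get_meminfo scanning each /proc/meminfo field in turn until it reaches HugePages_Total, which it echoes (1536 pages, matching the 512 + 1024 split that get_nodes then reads from /sys). A condensed sketch of that helper, assuming the same file-selection logic the trace shows; the function name and the sed-based prefix strip are illustrative (the real script strips the "Node <id> " prefix with an extglob parameter expansion, as the mem=("${mem[@]#Node +([0-9]) }") line shows):

    # Sketch of setup/common.sh's get_meminfo: pick the per-node meminfo
    # file when a node id is given, strip the "Node <id> " prefix, then
    # split each line on ": " and echo the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        sed 's/^Node [0-9]* //' "$mem_f" |
            while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] && { echo "$val"; break; }
            done
    }

On this box, get_meminfo_sketch HugePages_Total prints 1536, and get_meminfo_sketch HugePages_Surp 0 prints the node0 surplus (0 in this run).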
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.375 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 23935088 kB' 'MemUsed: 8698880 kB' 'SwapCached: 0 kB' 'Active: 6031360 kB' 'Inactive: 191968 kB' 'Active(anon): 5751552 kB' 'Inactive(anon): 0 kB' 'Active(file): 279808 kB' 'Inactive(file): 191968 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5992072 kB' 'Mapped: 72748 kB' 'AnonPages: 234444 kB' 'Shmem: 5520296 kB' 'KernelStack: 8728 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90556 kB' 'Slab: 332620 kB' 'SReclaimable: 90556 kB' 'SUnreclaim: 242064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.376 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
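At this point node0 is accounted for: get_meminfo read /sys/devices/system/node/node0/meminfo and returned HugePages_Surp = 0, so hugepages.sh adds resv (0) and that surplus to the 512 pages expected on node0, then moves on to node1. The same bookkeeping, condensed with awk in place of the traced read loop (a sketch using this run's numbers, not the script verbatim):

    # Per-node accounting as traced at hugepages.sh@110-117, with the
    # values this run reported; awk pulls HugePages_Surp from the node file.
    nr_hugepages=1536 surp=0 resv=0
    (( 1536 == nr_hugepages + surp + resv )) && echo 'global count OK'
    nodes_test=(512 1024)                  # expected split across the 2 nodes
    for node in 0 1; do
        s=$(awk -v k=HugePages_Surp: '$3 == k {print $4}' \
            "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += resv + s )) # both terms are 0 in this run
    done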
get_meminfo HugePages_Surp 1 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27661476 kB' 'MemFree: 15510400 kB' 'MemUsed: 12151076 kB' 'SwapCached: 0 kB' 'Active: 6061524 kB' 'Inactive: 3252820 kB' 'Active(anon): 5874300 kB' 'Inactive(anon): 0 kB' 'Active(file): 187224 kB' 'Inactive(file): 3252820 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9060252 kB' 'Mapped: 80464 kB' 'AnonPages: 254124 kB' 'Shmem: 5620208 kB' 'KernelStack: 7736 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108540 kB' 'Slab: 265688 kB' 'SReclaimable: 108540 kB' 'SUnreclaim: 157148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.377 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.378 node0=512 
expecting 512 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:00.378 node1=1024 expecting 1024 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:00.378 00:04:00.378 real 0m3.795s 00:04:00.378 user 0m1.448s 00:04:00.378 sys 0m2.414s 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.378 11:50:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.378 ************************************ 00:04:00.378 END TEST custom_alloc 00:04:00.378 ************************************ 00:04:00.378 11:50:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:00.378 11:50:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.378 11:50:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.378 11:50:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.378 ************************************ 00:04:00.378 START TEST no_shrink_alloc 00:04:00.378 ************************************ 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:00.378 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.379 11:50:37 
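custom_alloc therefore passes: the observed per-node totals match the expected 512,1024 pattern, and the test finishes in roughly 3.8 seconds of wall time. The next test, no_shrink_alloc, asks get_test_nr_hugepages for a 2097152 kB pool confined to node 0; with the 2048 kB hugepage size reported in /proc/meminfo, that works out to the nr_hugepages=1024 seen in the trace. A sketch of that sizing step, assuming the size argument is in kB as the arithmetic implies:

    # Pool sizing as at hugepages.sh@49-71 for this run: 2 GiB on node 0 only.
    size_kb=2097152
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo) # 2048
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                           # 1024
    node_ids=(0)                         # the user-supplied node list
    for id in "${node_ids[@]}"; do
        nodes_test[id]=$nr_hugepages     # whole pool expected on node 0
    done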
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.379 11:50:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:03.673 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:03.673 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.673 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:04:03.673 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.673 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.673 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.673 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.673 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:04:03.936 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.936 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.936 11:50:41 
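With the pool sized, setup.sh reports every device already bound to vfio-pci, and verify_nr_hugepages begins. Its first step is the THP gate visible at hugepages.sh@96: the literal "always [madvise] never" in that test is the content of /sys/kernel/mm/transparent_hugepage/enabled, and because the selected mode is not [never], the script goes on to sample AnonHugePages. Note that the empty node= makes the /sys/devices/system/node/node/meminfo existence check fail, so this read falls back to the system-wide /proc/meminfo. A sketch of that gate, using the standard sysfs path (the awk condensation is illustrative):

    # THP gate: only sample AnonHugePages when transparent hugepages are
    # not globally disabled, mirroring hugepages.sh@96-97.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    if [[ $thp != *'[never]'* ]]; then
        awk '$1 == "AnonHugePages:" {print "AnonHugePages:", $2, "kB"}' /proc/meminfo
    fi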
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.936 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40503220 kB' 'MemAvailable: 44017464 kB' 'Buffers: 2704 kB' 'Cached: 15049712 kB' 'SwapCached: 0 kB' 'Active: 12091096 kB' 'Inactive: 3444788 kB' 'Active(anon): 11624064 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486272 kB' 'Mapped: 153236 kB' 'Shmem: 11140596 kB' 'KReclaimable: 199144 kB' 'Slab: 598668 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399524 kB' 'KernelStack: 16416 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12939488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203508 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 
11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.937 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40503272 kB' 'MemAvailable: 44017516 kB' 'Buffers: 2704 kB' 'Cached: 15049716 kB' 'SwapCached: 0 kB' 'Active: 12090880 kB' 'Inactive: 3444788 kB' 'Active(anon): 11623848 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486556 kB' 'Mapped: 153112 kB' 'Shmem: 11140600 kB' 'KReclaimable: 199144 kB' 'Slab: 598652 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399508 kB' 'KernelStack: 16432 kB' 'PageTables: 7464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12939504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203492 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 
0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.938 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.939 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 
11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40506224 kB' 'MemAvailable: 44020468 kB' 'Buffers: 2704 kB' 'Cached: 15049736 kB' 'SwapCached: 0 kB' 'Active: 12090908 kB' 'Inactive: 3444788 kB' 'Active(anon): 11623876 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486568 kB' 'Mapped: 153112 kB' 'Shmem: 11140620 kB' 'KReclaimable: 199144 kB' 'Slab: 598644 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399500 kB' 'KernelStack: 16448 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12939660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 
'DirectMap1G: 48234496 kB' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.940 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.941 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.203 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@33 -- # echo 0 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.204 nr_hugepages=1024 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.204 resv_hugepages=0 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.204 surplus_hugepages=0 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.204 anon_hugepages=0 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40507264 kB' 'MemAvailable: 44021508 kB' 'Buffers: 2704 kB' 'Cached: 15049776 kB' 'SwapCached: 0 kB' 'Active: 12090876 kB' 'Inactive: 3444788 kB' 'Active(anon): 11623844 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486460 kB' 'Mapped: 153112 kB' 'Shmem: 11140660 kB' 'KReclaimable: 199144 kB' 'Slab: 598644 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399500 kB' 'KernelStack: 16400 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12939680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
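The wall of xtrace output above and below comes from setup/common.sh's get_meminfo helper: it loads the relevant meminfo file into an array, then walks it field by field, printing the value as soon as the requested key matches and falling through with `continue` otherwise. Patterns such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are simply how bash xtrace renders a quoted right-hand side of ==, i.e. a literal string match rather than a glob. A minimal sketch reconstructed from the traced statements (common.sh@17-33); the argument handling and the exact loop plumbing are assumptions:

    shopt -s extglob   # the +([0-9]) pattern below requires extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read the per-node statistics from sysfs instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # one such test per field in the trace
            echo "$val"                       # the unit ("kB") lands in $_ and is dropped
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The two call shapes visible in this log would then be get_meminfo HugePages_Total (whole system, via /proc/meminfo) and get_meminfo HugePages_Surp 0 (node 0, via /sys/devices/system/node/node0/meminfo).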
00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.204 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.205 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 22912420 kB' 'MemUsed: 9721548 kB' 'SwapCached: 0 kB' 'Active: 6029844 kB' 'Inactive: 191968 kB' 'Active(anon): 5750036 kB' 'Inactive(anon): 0 kB' 'Active(file): 279808 kB' 'Inactive(file): 191968 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5992212 kB' 'Mapped: 72620 kB' 'AnonPages: 232764 kB' 'Shmem: 5520436 kB' 'KernelStack: 8728 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90620 kB' 'Slab: 332804 kB' 'SReclaimable: 90620 kB' 'SUnreclaim: 242184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.206 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.207 node0=1024 expecting 1024 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.207 11:50:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:07.500 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:07.500 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:04:07.500 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:04:07.500 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.500 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.765 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.765 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.765 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:07.765 
11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40501448 kB' 'MemAvailable: 44015692 kB' 'Buffers: 2704 kB' 'Cached: 15049844 kB' 'SwapCached: 0 kB' 'Active: 12093460 kB' 'Inactive: 3444788 kB' 'Active(anon): 11626428 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489436 kB' 'Mapped: 153696 kB' 'Shmem: 11140728 kB' 'KReclaimable: 199144 kB' 'Slab: 598840 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399696 kB' 'KernelStack: 16432 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12942704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203556 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.765 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
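Everything from setup/hugepages.sh@204 onward re-runs this accounting after scripts/setup.sh was invoked with CLEAR_HUGE=no and NRHUGE=512 and reported that 1024 pages were already allocated on node0. Pieced together from the traced checks (hugepages.sh@89-130), the verification amounts to roughly the following sketch; the function name here is hypothetical, the real script keeps its per-node tallies in the nodes_sys/nodes_test arrays seen above, and the sketch assumes the get_meminfo helper (and extglob) from earlier:

    verify_nr_hugepages_sketch() {
        local expected=$1 node anon=0 surp resv
        # AnonHugePages is only charged when THP is not fully disabled; the
        # active policy is the bracketed word, e.g. "always [madvise] never"
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)
        fi
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        # The global pool must account for every requested page...
        (( $(get_meminfo HugePages_Total) == expected + surp + resv )) || return 1
        # ...and each node's pool is tallied and compared against what sysfs
        # reports (the trace prints e.g. "node0=1024 expecting 1024")
        for node in /sys/devices/system/node/node+([0-9]); do
            node=${node##*node}
            echo "node$node: HugePages_Total=$(get_meminfo HugePages_Total "$node")" \
                 "HugePages_Surp=$(get_meminfo HugePages_Surp "$node")"
        done
    }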
00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.766 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31 IFS=': ' read / @32 compare-and-continue trace repeats for every remaining /proc/meminfo field until the requested one matches ...]
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
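The long compare-and-continue run above is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one field at a time until it hits the requested key (here AnonHugePages, value 0). A minimal sketch of that loop, reconstructed from the xtrace and its @17-@33 line markers; names mirror the trace, but the body is an approximation, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1    # field to look up, e.g. AnonHugePages
        local node=$2   # optional NUMA node; empty selects the system-wide file
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node counters live in sysfs and prefix every line with "Node N ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix, if any

        # The long run above: read "Key: value" pairs, skipping (continue)
        # every key that is not the requested one.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1  # field not present
    }

Bailing out at the first match keeps the helper to a single pass over the snapshot, which is exactly why the trace shows one IFS=': ' read and one [[ ... ]] test per meminfo field.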
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:07.767 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40501084 kB' 'MemAvailable: 44015328 kB' 'Buffers: 2704 kB' 'Cached: 15049848 kB' 'SwapCached: 0 kB' 'Active: 12097396 kB' 'Inactive: 3444788 kB' 'Active(anon): 11630364 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492916 kB' 'Mapped: 153960 kB' 'Shmem: 11140732 kB' 'KReclaimable: 199144 kB' 'Slab: 598792 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399648 kB' 'KernelStack: 16448 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12946432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203528 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
[... the @31 read / @32 compare-and-continue trace repeats from MemTotal through every field until the requested one matches ...]
00:04:07.769 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.769 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.769 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.769 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
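For context, the caller side: hugepages.sh appears to capture each counter via command substitution before validating it (cf. the @97, @99, and @100 assignments in this trace). A hedged usage sketch; the final echo is illustrative only, and the Rsvd lookup happens next in the log below:

    anon=$(get_meminfo AnonHugePages)   # kB of transparent huge pages in use
    surp=$(get_meminfo HugePages_Surp)  # surplus huge pages
    resv=$(get_meminfo HugePages_Rsvd)  # reserved huge pages
    echo "anon=$anon surp=$surp resv=$resv"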
00:04:07.769 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... get_meminfo runs the same setup (common.sh@17-@31) against /proc/meminfo, then dumps the snapshot ...]
00:04:07.769 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40506452 kB' 'MemAvailable: 44020696 kB' 'Buffers: 2704 kB' 'Cached: 15049864 kB' 'SwapCached: 0 kB' 'Active: 12091660 kB' 'Inactive: 3444788 kB' 'Active(anon): 11624628 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487180 kB' 'Mapped: 153116 kB' 'Shmem: 11140748 kB' 'KReclaimable: 199144 kB' 'Slab: 598792 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399648 kB' 'KernelStack: 16448 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12940332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203524 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB'
[... the @31 read / @32 compare-and-continue trace repeats for every field until the requested one matches ...]
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:07.771 nr_hugepages=1024
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:07.771 resv_hugepages=0
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:07.771 surplus_hugepages=0
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:07.771 anon_hugepages=0
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.771 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
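The two arithmetic guards above encode the no_shrink_alloc invariant: with 1024 pages configured and none consumed, the free-page count (the literal 1024, expanded inline by xtrace) must equal nr_hugepages plus the surplus and reserved counts. A sketch of that check under the assumption that the left-hand operand comes from HugePages_Free; how the real script obtains it is not visible in this trace:

    nr_hugepages=1024 surp=0 resv=0              # values echoed above
    free=$(get_meminfo HugePages_Free)           # assumed source of the literal 1024
    (( free == nr_hugepages + surp + resv )) || exit 1  # cf. hugepages.sh@107
    (( free == nr_hugepages )) || exit 1                # cf. hugepages.sh@109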
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295444 kB' 'MemFree: 40506956 kB' 'MemAvailable: 44021200 kB' 'Buffers: 2704 kB' 'Cached: 15049864 kB' 'SwapCached: 0 kB' 'Active: 12091732 kB' 'Inactive: 3444788 kB' 'Active(anon): 11624700 kB' 'Inactive(anon): 0 kB' 'Active(file): 467032 kB' 'Inactive(file): 3444788 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487280 kB' 'Mapped: 153116 kB' 'Shmem: 11140748 kB' 'KReclaimable: 199144 kB' 'Slab: 598792 kB' 'SReclaimable: 199144 kB' 'SUnreclaim: 399648 kB' 'KernelStack: 16464 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487748 kB' 'Committed_AS: 12940356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203524 kB' 'VmallocChunk: 0 kB' 'Percpu: 50240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1678656 kB' 'DirectMap2M: 18968576 kB' 'DirectMap1G: 48234496 kB' 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.772 11:50:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.772 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.772 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.772 11:50:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats the identical IFS=': ' read / [[ <key> == HugePages_Total ]] / continue cycle for every remaining key of the snapshot above, from SwapCached through CmaFree, until the loop reaches the matching key below]
00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.773 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.774 11:50:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32633968 kB' 'MemFree: 22910420 kB' 'MemUsed: 9723548 kB' 'SwapCached: 0 kB' 'Active: 6031776 kB' 'Inactive: 191968 kB' 'Active(anon): 5751968 kB' 'Inactive(anon): 0 kB' 'Active(file): 279808 kB' 'Inactive(file): 191968 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5992280 kB' 'Mapped: 72620 kB' 'AnonPages: 234700 kB' 'Shmem: 5520504 kB' 'KernelStack: 8824 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90620 kB' 'Slab: 332944 kB' 'SReclaimable: 90620 kB' 'SUnreclaim: 242324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.774 11:50:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed: the same check-and-continue scan walks the node0 meminfo snapshot key by key, from Active(anon) through FileHugePages, skipping every key that is not HugePages_Surp]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.775 node0=1024 expecting 1024 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.775 00:04:07.775 real 0m7.484s 00:04:07.775 user 0m2.848s 00:04:07.775 sys 0m4.777s 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.775 11:50:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.775 ************************************ 00:04:07.775 END TEST no_shrink_alloc 00:04:07.775 ************************************ 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 
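The trace above is dominated by setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a node's /sys/devices/system/node/nodeN/meminfo), strips the per-node "Node N " prefix, then scans key by key until the requested counter matches. A minimal sketch of that pattern, reconstructed from the trace rather than copied from SPDK's source:

get_meminfo() {                        # usage: get_meminfo <key> [node]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # per-node counters live in sysfs and carry a "Node <N> " prefix
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob                   # needed for the +([0-9]) pattern below
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the prefix so keys line up
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Against the values logged above, get_meminfo HugePages_Total returns 1024 and get_meminfo HugePages_Surp 0 returns 0, which is exactly what lets the node0=1024 expecting 1024 assertion pass.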
00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.035 11:50:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.035 00:04:08.035 real 0m27.297s 00:04:08.035 user 0m10.305s 00:04:08.035 sys 0m17.528s 00:04:08.035 11:50:45 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.035 11:50:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.035 ************************************ 00:04:08.035 END TEST hugepages 00:04:08.035 ************************************ 00:04:08.035 11:50:45 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:08.035 11:50:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.035 11:50:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.035 11:50:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.035 ************************************ 00:04:08.035 START TEST driver 00:04:08.035 ************************************ 00:04:08.035 11:50:45 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:08.035 * Looking for test storage... 
00:04:08.035 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:08.036 11:50:45 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:08.036 11:50:45 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.036 11:50:45 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.311 11:50:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.311 11:50:50 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.311 11:50:50 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.311 11:50:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.311 ************************************ 00:04:13.311 START TEST guess_driver 00:04:13.311 ************************************ 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 167 > 0 )) 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.311 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.312 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.312 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.312 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.312 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.312 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.312 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.312 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:13.312 11:50:50 
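For reference, the driver pick traced above reduces to two tests: the kernel must expose populated IOMMU groups, and modprobe must be able to resolve the vfio_pci module chain. A condensed, illustrative rendering of that logic (the function name is ours; driver.sh structures it differently):

pick_vfio_driver() {
    shopt -s nullglob
    # a populated /sys/kernel/iommu_groups means the IOMMU is active;
    # the run above counted 167 groups
    local groups=(/sys/kernel/iommu_groups/*)
    (( ${#groups[@]} > 0 )) || return 1
    # vfio_pci counts as usable if its dependency chain resolves to .ko files
    modprobe --show-depends vfio_pci | grep -q '\.ko' || return 1
    echo vfio-pci
}

driver.sh also reads /sys/module/vfio/parameters/enable_unsafe_noiommu_mode (N in this run) before settling on vfio-pci.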
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.312 Looking for driver=vfio-pci 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.312 11:50:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:17.509 11:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.509 11:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.509 11:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[xtrace condensed: the driver.sh@57-61 marker/driver check repeats identically, once per device line emitted by setup.sh config, and every line reports vfio-pci]
00:04:17.509 11:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.509 11:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:17.509 11:50:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.509 11:50:54 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.849 00:04:22.849 real 0m8.901s 00:04:22.849 user 0m2.820s 00:04:22.849 sys 0m5.394s 00:04:22.849 11:50:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.849 11:50:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.849 ************************************ 00:04:22.849 END TEST guess_driver 00:04:22.849 ************************************ 00:04:22.849 00:04:22.849 real 0m14.234s 00:04:22.850 user 0m4.353s 00:04:22.850 sys 0m8.420s 00:04:22.850 11:50:59 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.850 11:50:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.850 ************************************ 00:04:22.850 END TEST driver 00:04:22.850 ************************************ 00:04:22.850 11:50:59 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:22.850 11:50:59 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.850 11:50:59 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.850 11:50:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.850 ************************************ 00:04:22.850 START TEST devices 00:04:22.850 ************************************ 00:04:22.850 11:50:59 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:22.850 * Looking for test storage... 00:04:22.850 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:22.850 11:50:59 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:22.850 11:50:59 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:22.850 11:50:59 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.850 11:50:59 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.046 11:51:03 
setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:27.046 No valid GPT data, bailing 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:af:00.0 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:04:27.046 No valid GPT data, bailing 00:04:27.046 11:51:03 setup.sh.devices -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:af:00.0 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:b0:00.0 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:04:27.046 No valid GPT data, bailing 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:27.046 11:51:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:27.046 11:51:03 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:b0:00.0 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@209 -- # (( 3 > 0 )) 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:27.046 11:51:03 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.046 11:51:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.046 ************************************ 00:04:27.046 START TEST nvme_mount 00:04:27.046 ************************************ 00:04:27.046 11:51:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:27.046 11:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:27.046 11:51:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:27.046 11:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.046 11:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.047 11:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:28.082 Creating new GPT entries in memory. 00:04:28.082 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.082 other utilities. 00:04:28.082 11:51:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.082 11:51:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.082 11:51:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.082 11:51:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.082 11:51:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:29.018 Creating new GPT entries in memory. 00:04:29.018 The operation has completed successfully. 
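The trace above is SPDK's partition_drive helper at work: wipe any existing GPT/MBR with sgdisk --zap-all, create one 1 GiB partition starting at LBA 2048 under an flock on the disk, and wait for the matching uevent before touching the new node. A minimal standalone sketch of the same flow, assuming a scratch disk: the device path and geometry are copied from the log, udevadm settle is an assumed stand-in for the repo's sync_dev_uevents.sh helper, and /mnt/nvme_test is a hypothetical mount point.

#!/usr/bin/env bash
set -euo pipefail

disk=/dev/nvme0n1
start=2048                                   # first usable LBA with 512 B sectors
end=$(( start + 1073741824 / 512 - 1 ))      # 2099199, i.e. 1 GiB worth of sectors

sgdisk "$disk" --zap-all                     # destroy old GPT and MBR structures
flock "$disk" sgdisk "$disk" --new=1:"$start":"$end"   # partition 1: 2048..2099199
udevadm settle                               # wait for /dev/nvme0n1p1 to appear
[[ -b ${disk}p1 ]]

mkfs.ext4 -qF "${disk}p1"                    # quiet + force, same flags as the trace
mkdir -p /mnt/nvme_test
mount "${disk}p1" /mnt/nvme_test
touch /mnt/nvme_test/test_nvme               # marker file the verify step checks for

The trace continues below with the same mkfs.ext4/mount sequence, then uses the setup.sh config output to confirm that the mounted disk is reported as an active device and therefore excluded from PCI rebinding.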
00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 876429 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:29.018 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.019 11:51:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 
11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.307 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.308 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.567 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.567 11:51:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.827 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:32.827 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:32.827 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.827 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.827 11:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.120 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.380 11:51:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.380 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:39.673 11:51:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.673 11:51:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.673 11:51:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.673 11:51:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.673 11:51:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.673 11:51:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:39.933 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.193 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.193 00:04:40.193 real 0m13.298s 00:04:40.193 user 0m3.975s 00:04:40.193 sys 0m7.344s 00:04:40.193 11:51:17 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.193 11:51:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:40.193 ************************************ 00:04:40.193 END TEST nvme_mount 00:04:40.193 ************************************ 00:04:40.193 11:51:17 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:40.193 11:51:17 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.193 11:51:17 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.193 11:51:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.193 ************************************ 00:04:40.193 START TEST dm_mount 00:04:40.193 ************************************ 00:04:40.193 11:51:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:40.193 11:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:40.193 11:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:40.193 11:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:40.193 11:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:40.193 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.193 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.194 11:51:17 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:41.132 Creating new GPT entries in memory. 00:04:41.132 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.132 other utilities. 00:04:41.133 11:51:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.133 11:51:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.133 11:51:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:41.133 11:51:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.133 11:51:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.527 Creating new GPT entries in memory. 00:04:42.527 The operation has completed successfully. 00:04:42.527 11:51:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.527 11:51:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.527 11:51:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.527 11:51:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.527 11:51:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:43.467 The operation has completed successfully. 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 880939 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:43.467 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.468 11:51:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.760 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.761 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:47.021 11:51:24 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.021 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.313 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
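At this point cleanup_dm has just removed the mapped device with dmsetup remove --force; the wipefs calls that follow scrub the two backing partitions. The trace never prints the dm table that nvme_dm_test was created with, so as an illustrative assumption the sketch below rebuilds an equivalent node as a linear concatenation of the two 1 GiB partitions; the partition names and the 2097152-sector size are taken from the log.

#!/usr/bin/env bash
set -euo pipefail

# dm table format, one target per line:
#   <start_sector> <num_sectors> linear <backing_dev> <offset>
printf '%s\n' \
  "0 2097152 linear /dev/nvme0n1p1 0" \
  "2097152 2097152 linear /dev/nvme0n1p2 0" |
  dmsetup create nvme_dm_test                  # node appears under /dev/mapper

dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # each backing partition now
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]   # lists the dm node as a holder

dmsetup remove --force nvme_dm_test              # what cleanup_dm runs above

Those holders links are exactly what the verify step keyed on earlier ("Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0"), which is why setup.sh refused to bind the busy PCI device.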
00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:50.574 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:50.574 
00:04:50.574 real 0m10.459s
00:04:50.574 user 0m2.767s
00:04:50.574 sys 0m4.842s
00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:50.574 11:51:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:50.574 ************************************
00:04:50.574 END TEST dm_mount
00:04:50.574 ************************************
00:04:50.835 11:51:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:50.835 11:51:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:50.835 11:51:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:50.835 11:51:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:50.835 11:51:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:50.835 11:51:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:50.835 11:51:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:51.095 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:51.095 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:51.095 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:51.095 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:51.095 11:51:28 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:51.095 11:51:28 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:51.095 11:51:28 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:51.095 11:51:28 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:51.095 11:51:28 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:51.095 11:51:28 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:51.095 11:51:28 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:51.095 
00:04:51.095 real 0m28.704s
00:04:51.095 user 0m8.430s
00:04:51.095 sys 0m15.386s
00:04:51.095 11:51:28 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:51.095 11:51:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:51.095 ************************************
00:04:51.095 END TEST devices
00:04:51.095 ************************************
00:04:51.095 
00:04:51.095 real 1m37.834s
00:04:51.095 user 0m31.908s
00:04:51.095 sys 0m58.180s
00:04:51.095 11:51:28 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:51.095 11:51:28 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:51.095 ************************************
00:04:51.095 END TEST setup.sh
00:04:51.095 ************************************
00:04:51.095 11:51:28 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:04:55.367 Hugepages
00:04:55.367 node hugesize free / total
00:04:55.367 node0 1048576kB 0 / 0
00:04:55.367 node0 2048kB 2048 / 2048
00:04:55.367 node1 1048576kB 0 / 0
00:04:55.367 node1 2048kB 0 / 0
00:04:55.367 
00:04:55.367 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:55.367 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:55.367 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:55.367 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:55.367 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:55.367 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:55.367 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:55.367 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:55.367 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:55.367 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:04:55.367 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:55.367 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:55.367 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:55.367 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:55.367 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:55.367 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:55.367 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:55.367 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:55.367 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1
00:04:55.367 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1
00:04:55.367 11:51:32 -- spdk/autotest.sh@130 -- # uname -s
00:04:55.367 11:51:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:55.367 11:51:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:55.367 11:51:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:58.661 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:af:00.0 (8086 2701): nvme -> vfio-pci
00:04:58.661 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:58.661 0000:b0:00.0 (8086 2701): nvme -> vfio-pci
00:05:00.570 0000:5e:00.0 (144d a80a): nvme -> vfio-pci
00:05:00.570 11:51:37 -- common/autotest_common.sh@1532 -- # sleep 1
00:05:01.508 11:51:38 -- common/autotest_common.sh@1533 -- # bdfs=()
00:05:01.508 11:51:38 -- common/autotest_common.sh@1533 -- # local bdfs
00:05:01.508 11:51:38 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:05:01.508 11:51:38 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:05:01.508 11:51:38 -- common/autotest_common.sh@1513 -- # bdfs=()
00:05:01.508 11:51:38 -- common/autotest_common.sh@1513 -- # local bdfs
00:05:01.508 11:51:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:01.508 11:51:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh
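The "ioatdma -> vfio-pci" / "nvme -> vfio-pci" lines above come from scripts/setup.sh taking every I/OAT and NVMe function away from its kernel driver. The script layers device selection, hugepage setup and safety checks on top, but the per-device mechanism underneath is the generic sysfs driver_override dance, sketched here for one BDF taken from the log:

#!/usr/bin/env bash
set -euo pipefail

bdf=0000:af:00.0                      # example BDF from the log
dev=/sys/bus/pci/devices/$bdf

modprobe vfio-pci
if [[ -e $dev/driver ]]; then
    echo "$bdf" > "$dev/driver/unbind"       # detach the current kernel driver
fi
echo vfio-pci > "$dev/driver_override"       # pin the next probe to vfio-pci
echo "$bdf" > /sys/bus/pci/drivers_probe     # trigger the rebind
echo > "$dev/driver_override"                # clear the override afterwards

setup.sh reset, traced next, performs the reverse walk and hands the devices back to nvme/ioatdma, after which nvme_namespace_revert probes each /dev/nvmeX controller with nvme id-ctrl. Its decision hinges on OACS bit 3 (0x8), the Namespace Management capability; the snippet below reproduces the values seen in the trace, though the exact expression inside autotest_common.sh is an assumption:

oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # ' 0x5f' for nvme0
oacs_ns_manage=$(( oacs & 0x8 ))                            # 8 = supported, 0 = not

With oacs 0x5f the check passes for nvme0, but its unvmcap of 0 means there is no unallocated capacity to reclaim, so the loop continues without reverting; nvme1 and nvme2 report oacs 0x7, so namespace management is unsupported and they are skipped outright.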
-- # jq -r '.config[].params.traddr' 00:05:01.766 11:51:38 -- common/autotest_common.sh@1515 -- # (( 3 == 0 )) 00:05:01.766 11:51:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:05:01.766 11:51:38 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.055 Waiting for block devices as requested 00:05:05.055 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:05:05.314 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:05:05.314 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:05.574 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:05.574 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:05.574 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.833 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:05.833 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:05.833 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:06.092 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:06.092 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:05:06.092 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:06.350 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:06.350 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:06.350 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:06.609 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:06.609 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:06.609 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:06.868 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:06.868 11:51:43 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:06.868 11:51:43 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:06.868 11:51:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:06.868 11:51:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:06.868 11:51:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:06.868 11:51:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1557 -- # continue 00:05:06.868 11:51:44 -- common/autotest_common.sh@1538 -- # for 
bdf in "${bdfs[@]}" 00:05:06.868 11:51:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # grep 0000:af:00.0/nvme/nvme 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:05:06.868 11:51:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:05:06.868 11:51:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:06.868 11:51:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:06.868 11:51:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x7' 00:05:06.868 11:51:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:06.868 11:51:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # grep 0000:b0:00.0/nvme/nvme 00:05:06.868 11:51:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:06.868 11:51:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x7' 00:05:06.868 11:51:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:06.868 11:51:44 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:06.868 11:51:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:06.868 11:51:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.868 11:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:06.868 11:51:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:06.868 11:51:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.868 11:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:06.868 11:51:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:11.064 0000:00:04.7 (8086 2021): ioatdma 
-> vfio-pci 00:05:11.064 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:05:11.064 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.064 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:05:11.064 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:05:11.064 11:51:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:11.064 11:51:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.064 11:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 11:51:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:11.064 11:51:47 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:11.064 11:51:48 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.064 11:51:48 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:11.064 11:51:48 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:11.064 11:51:48 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:11.064 11:51:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:11.064 11:51:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:11.064 11:51:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.064 11:51:48 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.064 11:51:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:11.064 11:51:48 -- common/autotest_common.sh@1515 -- # (( 3 == 0 )) 00:05:11.064 11:51:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:05:11.064 11:51:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:11.064 11:51:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:11.064 11:51:48 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:11.064 11:51:48 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:11.064 11:51:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:11.064 11:51:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:af:00.0/device 00:05:11.064 11:51:48 -- common/autotest_common.sh@1580 -- # device=0x2701 00:05:11.064 11:51:48 -- common/autotest_common.sh@1581 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:05:11.064 11:51:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:11.064 11:51:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device 00:05:11.064 11:51:48 -- common/autotest_common.sh@1580 -- # device=0x2701 00:05:11.064 11:51:48 -- common/autotest_common.sh@1581 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:05:11.064 11:51:48 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:11.064 11:51:48 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:11.064 11:51:48 -- common/autotest_common.sh@1593 -- # return 0 00:05:11.064 11:51:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:11.064 11:51:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:11.064 11:51:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:11.064 11:51:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:11.064 11:51:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:11.064 11:51:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.064 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 11:51:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:11.064 11:51:48 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:11.064 11:51:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.064 11:51:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.064 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 ************************************ 00:05:11.064 START TEST env 00:05:11.064 ************************************ 00:05:11.064 11:51:48 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:11.064 * Looking for test storage... 00:05:11.064 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:11.064 11:51:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:11.064 11:51:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.064 11:51:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.064 11:51:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 ************************************ 00:05:11.324 START TEST env_memory 00:05:11.324 ************************************ 00:05:11.324 11:51:48 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:11.324 00:05:11.324 00:05:11.324 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.324 http://cunit.sourceforge.net/ 00:05:11.324 00:05:11.324 00:05:11.324 Suite: memory 00:05:11.324 Test: alloc and free memory map ...[2024-07-25 11:51:48.405443] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:11.324 passed 00:05:11.324 Test: mem map translation ...[2024-07-25 11:51:48.418613] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:11.324 [2024-07-25 11:51:48.418636] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:11.324 [2024-07-25 11:51:48.418666] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:11.324 [2024-07-25 11:51:48.418675] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:11.324 passed 00:05:11.324 Test: mem map registration ...[2024-07-25 
11:51:48.439729] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:11.324 [2024-07-25 11:51:48.439750] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:11.324 passed 00:05:11.324 Test: mem map adjacent registrations ...passed 00:05:11.324 00:05:11.324 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.324 suites 1 1 n/a 0 0 00:05:11.324 tests 4 4 4 0 0 00:05:11.324 asserts 152 152 152 0 n/a 00:05:11.324 00:05:11.325 Elapsed time = 0.076 seconds 00:05:11.325 00:05:11.325 real 0m0.090s 00:05:11.325 user 0m0.072s 00:05:11.325 sys 0m0.017s 00:05:11.325 11:51:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.325 11:51:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:11.325 ************************************ 00:05:11.325 END TEST env_memory 00:05:11.325 ************************************ 00:05:11.325 11:51:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.325 11:51:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.325 11:51:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.325 11:51:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.325 ************************************ 00:05:11.325 START TEST env_vtophys 00:05:11.325 ************************************ 00:05:11.325 11:51:48 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.325 EAL: lib.eal log level changed from notice to debug 00:05:11.325 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.325 EAL: Detected lcore 1 as core 1 on socket 0 00:05:11.325 EAL: Detected lcore 2 as core 2 on socket 0 00:05:11.325 EAL: Detected lcore 3 as core 3 on socket 0 00:05:11.325 EAL: Detected lcore 4 as core 4 on socket 0 00:05:11.325 EAL: Detected lcore 5 as core 8 on socket 0 00:05:11.325 EAL: Detected lcore 6 as core 9 on socket 0 00:05:11.325 EAL: Detected lcore 7 as core 10 on socket 0 00:05:11.325 EAL: Detected lcore 8 as core 11 on socket 0 00:05:11.325 EAL: Detected lcore 9 as core 16 on socket 0 00:05:11.325 EAL: Detected lcore 10 as core 17 on socket 0 00:05:11.325 EAL: Detected lcore 11 as core 18 on socket 0 00:05:11.325 EAL: Detected lcore 12 as core 19 on socket 0 00:05:11.325 EAL: Detected lcore 13 as core 20 on socket 0 00:05:11.325 EAL: Detected lcore 14 as core 24 on socket 0 00:05:11.325 EAL: Detected lcore 15 as core 25 on socket 0 00:05:11.325 EAL: Detected lcore 16 as core 26 on socket 0 00:05:11.325 EAL: Detected lcore 17 as core 27 on socket 0 00:05:11.325 EAL: Detected lcore 18 as core 0 on socket 1 00:05:11.325 EAL: Detected lcore 19 as core 1 on socket 1 00:05:11.325 EAL: Detected lcore 20 as core 2 on socket 1 00:05:11.325 EAL: Detected lcore 21 as core 3 on socket 1 00:05:11.325 EAL: Detected lcore 22 as core 4 on socket 1 00:05:11.325 EAL: Detected lcore 23 as core 8 on socket 1 00:05:11.325 EAL: Detected lcore 24 as core 9 on socket 1 00:05:11.325 EAL: Detected lcore 25 as core 10 on socket 1 00:05:11.325 EAL: Detected lcore 26 as core 11 on socket 1 00:05:11.325 EAL: Detected lcore 27 as core 16 on socket 1 00:05:11.325 EAL: Detected lcore 28 as core 17 on socket 1 00:05:11.325 EAL: Detected 
lcore 29 as core 18 on socket 1 00:05:11.325 EAL: Detected lcore 30 as core 19 on socket 1 00:05:11.325 EAL: Detected lcore 31 as core 20 on socket 1 00:05:11.325 EAL: Detected lcore 32 as core 24 on socket 1 00:05:11.325 EAL: Detected lcore 33 as core 25 on socket 1 00:05:11.325 EAL: Detected lcore 34 as core 26 on socket 1 00:05:11.325 EAL: Detected lcore 35 as core 27 on socket 1 00:05:11.325 EAL: Detected lcore 36 as core 0 on socket 0 00:05:11.325 EAL: Detected lcore 37 as core 1 on socket 0 00:05:11.325 EAL: Detected lcore 38 as core 2 on socket 0 00:05:11.325 EAL: Detected lcore 39 as core 3 on socket 0 00:05:11.325 EAL: Detected lcore 40 as core 4 on socket 0 00:05:11.325 EAL: Detected lcore 41 as core 8 on socket 0 00:05:11.325 EAL: Detected lcore 42 as core 9 on socket 0 00:05:11.325 EAL: Detected lcore 43 as core 10 on socket 0 00:05:11.325 EAL: Detected lcore 44 as core 11 on socket 0 00:05:11.325 EAL: Detected lcore 45 as core 16 on socket 0 00:05:11.325 EAL: Detected lcore 46 as core 17 on socket 0 00:05:11.325 EAL: Detected lcore 47 as core 18 on socket 0 00:05:11.325 EAL: Detected lcore 48 as core 19 on socket 0 00:05:11.325 EAL: Detected lcore 49 as core 20 on socket 0 00:05:11.325 EAL: Detected lcore 50 as core 24 on socket 0 00:05:11.325 EAL: Detected lcore 51 as core 25 on socket 0 00:05:11.325 EAL: Detected lcore 52 as core 26 on socket 0 00:05:11.325 EAL: Detected lcore 53 as core 27 on socket 0 00:05:11.325 EAL: Detected lcore 54 as core 0 on socket 1 00:05:11.325 EAL: Detected lcore 55 as core 1 on socket 1 00:05:11.325 EAL: Detected lcore 56 as core 2 on socket 1 00:05:11.325 EAL: Detected lcore 57 as core 3 on socket 1 00:05:11.325 EAL: Detected lcore 58 as core 4 on socket 1 00:05:11.325 EAL: Detected lcore 59 as core 8 on socket 1 00:05:11.325 EAL: Detected lcore 60 as core 9 on socket 1 00:05:11.325 EAL: Detected lcore 61 as core 10 on socket 1 00:05:11.325 EAL: Detected lcore 62 as core 11 on socket 1 00:05:11.325 EAL: Detected lcore 63 as core 16 on socket 1 00:05:11.325 EAL: Detected lcore 64 as core 17 on socket 1 00:05:11.325 EAL: Detected lcore 65 as core 18 on socket 1 00:05:11.325 EAL: Detected lcore 66 as core 19 on socket 1 00:05:11.325 EAL: Detected lcore 67 as core 20 on socket 1 00:05:11.325 EAL: Detected lcore 68 as core 24 on socket 1 00:05:11.325 EAL: Detected lcore 69 as core 25 on socket 1 00:05:11.325 EAL: Detected lcore 70 as core 26 on socket 1 00:05:11.325 EAL: Detected lcore 71 as core 27 on socket 1 00:05:11.325 EAL: Maximum logical cores by configuration: 128 00:05:11.325 EAL: Detected CPU lcores: 72 00:05:11.325 EAL: Detected NUMA nodes: 2 00:05:11.325 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:11.325 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:11.325 EAL: Checking presence of .so 'librte_eal.so' 00:05:11.325 EAL: Detected static linkage of DPDK 00:05:11.325 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.325 EAL: Bus pci wants IOVA as 'DC' 00:05:11.325 EAL: Buses did not request a specific IOVA mode. 00:05:11.325 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:11.325 EAL: Selected IOVA mode 'VA' 00:05:11.325 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.325 EAL: Probing VFIO support... 
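A note on the EAL decisions logged above: the buses declared no IOVA preference ('DC'), an IOMMU was detected, and so IOVA mode 'VA' was selected before the VFIO probe results below. A minimal sketch, assuming only the standard sysfs layout, for anticipating that choice on a given host (illustrative, not part of the test):

    # Populated IOMMU groups usually let the EAL select IOVA mode 'VA';
    # an empty directory typically forces a fall back to physical addresses ('PA').
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU groups present: expect \"Selected IOVA mode 'VA'\""
    else
        echo "no IOMMU groups: expect IOVA mode 'PA'"
    fi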
00:05:11.325 EAL: IOMMU type 1 (Type 1) is supported 00:05:11.325 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:11.325 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:11.325 EAL: VFIO support initialized 00:05:11.325 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.325 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.325 EAL: Setting up physically contiguous memory... 00:05:11.325 EAL: Setting maximum number of open files to 524288 00:05:11.325 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.325 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:11.325 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.325 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:11.325 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.325 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:11.325 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.325 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.325 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:11.325 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:11.325 EAL: Hugepages will be freed exactly as allocated. 00:05:11.325 EAL: No shared files mode enabled, IPC is disabled 00:05:11.325 EAL: No shared files mode enabled, IPC is disabled 00:05:11.325 EAL: TSC frequency is ~2300000 KHz 00:05:11.325 EAL: Main lcore 0 is ready (tid=7fad74ba7a00;cpuset=[0]) 00:05:11.325 EAL: Trying to obtain current memory policy. 00:05:11.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.325 EAL: Restoring previous memory policy: 0 00:05:11.325 EAL: request: mp_malloc_sync 00:05:11.325 EAL: No shared files mode enabled, IPC is disabled 00:05:11.325 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.325 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.586 00:05:11.586 00:05:11.586 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.586 http://cunit.sourceforge.net/ 00:05:11.586 00:05:11.586 00:05:11.586 Suite: components_suite 00:05:11.586 Test: vtophys_malloc_test ...passed 00:05:11.586 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.586 EAL: Trying to obtain current memory policy. 00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.586 EAL: Trying to obtain current memory policy. 00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.586 EAL: Trying to obtain current memory policy. 
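The memseg reservations above follow a fixed pattern: four lists per NUMA socket, each pairing a 0x61000-byte header area with a 0x400000000-byte VA window. That window is exactly 8192 segments of 2 MiB, matching the reported n_segs:8192 and hugepage_sz:2097152. A one-line sanity check, purely illustrative:

    # 8192 hugepages x 2 MiB per memseg list = the 0x400000000-byte VA windows above
    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))    # prints 0x400000000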
00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.586 EAL: Trying to obtain current memory policy. 00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.586 EAL: Trying to obtain current memory policy. 00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.586 EAL: Trying to obtain current memory policy. 00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.586 EAL: Trying to obtain current memory policy. 00:05:11.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.586 EAL: Restoring previous memory policy: 4 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.586 EAL: request: mp_malloc_sync 00:05:11.586 EAL: No shared files mode enabled, IPC is disabled 00:05:11.586 EAL: Heap on socket 0 was expanded by 258MB 00:05:11.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.846 EAL: request: mp_malloc_sync 00:05:11.846 EAL: No shared files mode enabled, IPC is disabled 00:05:11.846 EAL: Heap on socket 0 was shrunk by 258MB 00:05:11.846 EAL: Trying to obtain current memory policy. 
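The heap expansions in this test are not arbitrary: the sizes reported so far (4, 6, 10, 18, 34, 66, 130, 258 MB, with 514 and 1026 MB still to come) follow 2^n + 2 MB, consistent with power-of-two test allocations plus a fixed 2 MB already resident on the heap. The sequence can be regenerated with a trivial loop (illustrative only):

    # regenerates the expansion sizes logged by vtophys_spdk_malloc_test
    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo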
00:05:11.846 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.846 EAL: Restoring previous memory policy: 4 00:05:11.846 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.846 EAL: request: mp_malloc_sync 00:05:11.846 EAL: No shared files mode enabled, IPC is disabled 00:05:11.846 EAL: Heap on socket 0 was expanded by 514MB 00:05:11.846 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.105 EAL: request: mp_malloc_sync 00:05:12.105 EAL: No shared files mode enabled, IPC is disabled 00:05:12.105 EAL: Heap on socket 0 was shrunk by 514MB 00:05:12.105 EAL: Trying to obtain current memory policy. 00:05:12.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.365 EAL: Restoring previous memory policy: 4 00:05:12.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.365 EAL: request: mp_malloc_sync 00:05:12.365 EAL: No shared files mode enabled, IPC is disabled 00:05:12.365 EAL: Heap on socket 0 was expanded by 1026MB 00:05:12.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.624 EAL: request: mp_malloc_sync 00:05:12.624 EAL: No shared files mode enabled, IPC is disabled 00:05:12.624 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.624 passed 00:05:12.624 00:05:12.624 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.624 suites 1 1 n/a 0 0 00:05:12.624 tests 2 2 2 0 0 00:05:12.624 asserts 497 497 497 0 n/a 00:05:12.624 00:05:12.624 Elapsed time = 1.147 seconds 00:05:12.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.624 EAL: request: mp_malloc_sync 00:05:12.624 EAL: No shared files mode enabled, IPC is disabled 00:05:12.624 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.624 EAL: No shared files mode enabled, IPC is disabled 00:05:12.624 EAL: No shared files mode enabled, IPC is disabled 00:05:12.624 EAL: No shared files mode enabled, IPC is disabled 00:05:12.624 00:05:12.624 real 0m1.286s 00:05:12.624 user 0m0.739s 00:05:12.624 sys 0m0.515s 00:05:12.624 11:51:49 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.624 11:51:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.624 ************************************ 00:05:12.624 END TEST env_vtophys 00:05:12.624 ************************************ 00:05:12.624 11:51:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:12.624 11:51:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.624 11:51:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.624 11:51:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.624 ************************************ 00:05:12.624 START TEST env_pci 00:05:12.624 ************************************ 00:05:12.624 11:51:49 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:12.883 00:05:12.883 00:05:12.883 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.883 http://cunit.sourceforge.net/ 00:05:12.883 00:05:12.883 00:05:12.883 Suite: pci 00:05:12.883 Test: pci_hook ...[2024-07-25 11:51:49.936443] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 890264 has claimed it 00:05:12.883 EAL: Cannot find device (10000:00:01.0) 00:05:12.883 EAL: Failed to attach device on primary process 00:05:12.883 passed 00:05:12.883 00:05:12.883 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:12.883 suites 1 1 n/a 0 0 00:05:12.883 tests 1 1 1 0 0 00:05:12.883 asserts 25 25 25 0 n/a 00:05:12.883 00:05:12.884 Elapsed time = 0.033 seconds 00:05:12.884 00:05:12.884 real 0m0.054s 00:05:12.884 user 0m0.012s 00:05:12.884 sys 0m0.041s 00:05:12.884 11:51:49 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.884 11:51:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.884 ************************************ 00:05:12.884 END TEST env_pci 00:05:12.884 ************************************ 00:05:12.884 11:51:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.884 11:51:50 env -- env/env.sh@15 -- # uname 00:05:12.884 11:51:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.884 11:51:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.884 11:51:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.884 11:51:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:12.884 11:51:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.884 11:51:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.884 ************************************ 00:05:12.884 START TEST env_dpdk_post_init 00:05:12.884 ************************************ 00:05:12.884 11:51:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.884 EAL: Detected CPU lcores: 72 00:05:12.884 EAL: Detected NUMA nodes: 2 00:05:12.884 EAL: Detected static linkage of DPDK 00:05:12.884 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.884 EAL: Selected IOVA mode 'VA' 00:05:12.884 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.884 EAL: VFIO support initialized 00:05:12.884 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.142 EAL: Using IOMMU type 1 (Type 1) 00:05:13.142 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:05:13.401 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1) 00:05:13.661 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1) 00:05:13.661 EAL: Releasing PCI mapped resource for 0000:af:00.0 00:05:13.661 EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001004000 00:05:13.920 EAL: Releasing PCI mapped resource for 0000:b0:00.0 00:05:13.920 EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001008000 00:05:13.920 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:13.920 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:05:13.920 Starting DPDK initialization... 00:05:13.920 Starting SPDK post initialization... 00:05:13.920 SPDK NVMe probe 00:05:13.920 Attaching to 0000:5e:00.0 00:05:13.920 Attaching to 0000:af:00.0 00:05:13.920 Attaching to 0000:b0:00.0 00:05:13.920 Attached to 0000:af:00.0 00:05:13.920 Attached to 0000:b0:00.0 00:05:13.920 Attached to 0000:5e:00.0 00:05:13.920 Cleaning up... 
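Worth noting in the output above: the three controllers attach in probe-completion order (af, b0, then 5e), not in the order the probes were issued, and each BAR mapping is released during cleanup. When a run like this misbehaves, checking which kernel driver currently owns each BDF is a quick first step; this is a sketch against the standard sysfs layout, not part of the test:

    for bdf in 0000:5e:00.0 0000:af:00.0 0000:b0:00.0; do
        link="/sys/bus/pci/devices/$bdf/driver"
        if [ -L "$link" ]; then
            echo "$bdf -> $(basename "$(readlink -f "$link")")"   # nvme or vfio-pci
        else
            echo "$bdf -> unbound"
        fi
    done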
00:05:13.920 00:05:13.920 real 0m1.149s 00:05:13.920 user 0m0.360s 00:05:13.920 sys 0m0.106s 00:05:13.920 11:51:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.920 11:51:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.920 ************************************ 00:05:13.920 END TEST env_dpdk_post_init 00:05:13.920 ************************************ 00:05:14.180 11:51:51 env -- env/env.sh@26 -- # uname 00:05:14.180 11:51:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:14.180 11:51:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.180 11:51:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.180 11:51:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.180 11:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.180 ************************************ 00:05:14.180 START TEST env_mem_callbacks 00:05:14.180 ************************************ 00:05:14.180 11:51:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.180 EAL: Detected CPU lcores: 72 00:05:14.180 EAL: Detected NUMA nodes: 2 00:05:14.180 EAL: Detected static linkage of DPDK 00:05:14.180 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.180 EAL: Selected IOVA mode 'VA' 00:05:14.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.180 EAL: VFIO support initialized 00:05:14.180 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.180 00:05:14.180 00:05:14.180 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.180 http://cunit.sourceforge.net/ 00:05:14.180 00:05:14.180 00:05:14.180 Suite: memory 00:05:14.180 Test: test ... 
00:05:14.180 register 0x200000200000 2097152 00:05:14.180 malloc 3145728 00:05:14.180 register 0x200000400000 4194304 00:05:14.180 buf 0x200000500000 len 3145728 PASSED 00:05:14.180 malloc 64 00:05:14.180 buf 0x2000004fff40 len 64 PASSED 00:05:14.180 malloc 4194304 00:05:14.180 register 0x200000800000 6291456 00:05:14.180 buf 0x200000a00000 len 4194304 PASSED 00:05:14.180 free 0x200000500000 3145728 00:05:14.180 free 0x2000004fff40 64 00:05:14.180 unregister 0x200000400000 4194304 PASSED 00:05:14.180 free 0x200000a00000 4194304 00:05:14.180 unregister 0x200000800000 6291456 PASSED 00:05:14.180 malloc 8388608 00:05:14.180 register 0x200000400000 10485760 00:05:14.180 buf 0x200000600000 len 8388608 PASSED 00:05:14.180 free 0x200000600000 8388608 00:05:14.180 unregister 0x200000400000 10485760 PASSED 00:05:14.180 passed 00:05:14.180 00:05:14.180 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.180 suites 1 1 n/a 0 0 00:05:14.180 tests 1 1 1 0 0 00:05:14.180 asserts 15 15 15 0 n/a 00:05:14.180 00:05:14.180 Elapsed time = 0.009 seconds 00:05:14.180 00:05:14.180 real 0m0.075s 00:05:14.180 user 0m0.023s 00:05:14.180 sys 0m0.052s 00:05:14.180 11:51:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.180 11:51:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:14.180 ************************************ 00:05:14.180 END TEST env_mem_callbacks 00:05:14.180 ************************************ 00:05:14.180 00:05:14.180 real 0m3.190s 00:05:14.180 user 0m1.398s 00:05:14.180 sys 0m1.119s 00:05:14.180 11:51:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.180 11:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.180 ************************************ 00:05:14.180 END TEST env 00:05:14.180 ************************************ 00:05:14.180 11:51:51 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:14.180 11:51:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.180 11:51:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.180 11:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:14.440 ************************************ 00:05:14.440 START TEST rpc 00:05:14.440 ************************************ 00:05:14.440 11:51:51 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:14.440 * Looking for test storage... 00:05:14.440 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:14.440 11:51:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=890557 00:05:14.440 11:51:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.440 11:51:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:14.440 11:51:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 890557 00:05:14.440 11:51:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 890557 ']' 00:05:14.440 11:51:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.440 11:51:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.440 11:51:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
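At this point the harness blocks in waitforlisten until the freshly started spdk_tgt (pid 890557 in this run) answers on /var/tmp/spdk.sock. A hedged sketch of that polling pattern, with illustrative names rather than the exact SPDK helper:

    # poll the RPC socket until the target responds; rpc_get_methods is a cheap query
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} tries=100
        while (( tries-- > 0 )); do
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }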
00:05:14.440 11:51:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.440 11:51:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.440 [2024-07-25 11:51:51.641280] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:14.440 [2024-07-25 11:51:51.641371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890557 ] 00:05:14.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.440 [2024-07-25 11:51:51.726152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.700 [2024-07-25 11:51:51.817697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:14.700 [2024-07-25 11:51:51.817741] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 890557' to capture a snapshot of events at runtime. 00:05:14.700 [2024-07-25 11:51:51.817754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:14.700 [2024-07-25 11:51:51.817763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:14.700 [2024-07-25 11:51:51.817771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid890557 for offline analysis/debug. 00:05:14.700 [2024-07-25 11:51:51.817799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.269 11:51:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.269 11:51:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:15.269 11:51:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:15.269 11:51:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:15.269 11:51:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:15.269 11:51:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:15.269 11:51:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.269 11:51:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.269 11:51:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.269 ************************************ 00:05:15.269 START TEST rpc_integrity 00:05:15.269 ************************************ 00:05:15.269 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:15.269 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.269 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.269 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.269 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.269 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.269 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 
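The jq length check that resumes below is the core pattern of rpc_integrity: every step goes through rpc_cmd and the JSON answer is validated with jq. The test lists bdevs (expecting none), creates a malloc bdev, wraps it in a passthru, re-checks the count, then deletes both. A condensed illustrative form of that round trip, using the same RPCs that appear in this log:

    before=$(scripts/rpc.py bdev_get_bdevs | jq length)           # 0 on a fresh target
    name=$(scripts/rpc.py bdev_malloc_create 8 512)               # 8 MiB bdev, 512-byte blocks
    scripts/rpc.py bdev_passthru_create -b "$name" -p Passthru0
    after=$(scripts/rpc.py bdev_get_bdevs | jq length)            # now 2, as asserted below
    [ "$after" -eq $(( before + 2 )) ] && echo "round trip OK"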
00:05:15.269 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.269 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.269 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.269 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.529 { 00:05:15.529 "name": "Malloc0", 00:05:15.529 "aliases": [ 00:05:15.529 "91272434-dea2-41ab-9426-28e7bce979e7" 00:05:15.529 ], 00:05:15.529 "product_name": "Malloc disk", 00:05:15.529 "block_size": 512, 00:05:15.529 "num_blocks": 16384, 00:05:15.529 "uuid": "91272434-dea2-41ab-9426-28e7bce979e7", 00:05:15.529 "assigned_rate_limits": { 00:05:15.529 "rw_ios_per_sec": 0, 00:05:15.529 "rw_mbytes_per_sec": 0, 00:05:15.529 "r_mbytes_per_sec": 0, 00:05:15.529 "w_mbytes_per_sec": 0 00:05:15.529 }, 00:05:15.529 "claimed": false, 00:05:15.529 "zoned": false, 00:05:15.529 "supported_io_types": { 00:05:15.529 "read": true, 00:05:15.529 "write": true, 00:05:15.529 "unmap": true, 00:05:15.529 "flush": true, 00:05:15.529 "reset": true, 00:05:15.529 "nvme_admin": false, 00:05:15.529 "nvme_io": false, 00:05:15.529 "nvme_io_md": false, 00:05:15.529 "write_zeroes": true, 00:05:15.529 "zcopy": true, 00:05:15.529 "get_zone_info": false, 00:05:15.529 "zone_management": false, 00:05:15.529 "zone_append": false, 00:05:15.529 "compare": false, 00:05:15.529 "compare_and_write": false, 00:05:15.529 "abort": true, 00:05:15.529 "seek_hole": false, 00:05:15.529 "seek_data": false, 00:05:15.529 "copy": true, 00:05:15.529 "nvme_iov_md": false 00:05:15.529 }, 00:05:15.529 "memory_domains": [ 00:05:15.529 { 00:05:15.529 "dma_device_id": "system", 00:05:15.529 "dma_device_type": 1 00:05:15.529 }, 00:05:15.529 { 00:05:15.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.529 "dma_device_type": 2 00:05:15.529 } 00:05:15.529 ], 00:05:15.529 "driver_specific": {} 00:05:15.529 } 00:05:15.529 ]' 00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.529 [2024-07-25 11:51:52.644374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:15.529 [2024-07-25 11:51:52.644408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.529 [2024-07-25 11:51:52.644425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x52ac120 00:05:15.529 [2024-07-25 11:51:52.644434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.529 [2024-07-25 11:51:52.645262] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered
00:05:15.529 [2024-07-25 11:51:52.645286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:15.529 Passthru0
00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.529 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.529 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:15.529 {
00:05:15.529 "name": "Malloc0",
00:05:15.529 "aliases": [
00:05:15.529 "91272434-dea2-41ab-9426-28e7bce979e7"
00:05:15.529 ],
00:05:15.529 "product_name": "Malloc disk",
00:05:15.529 "block_size": 512,
00:05:15.529 "num_blocks": 16384,
00:05:15.529 "uuid": "91272434-dea2-41ab-9426-28e7bce979e7",
00:05:15.529 "assigned_rate_limits": {
00:05:15.529 "rw_ios_per_sec": 0,
00:05:15.529 "rw_mbytes_per_sec": 0,
00:05:15.529 "r_mbytes_per_sec": 0,
00:05:15.529 "w_mbytes_per_sec": 0
00:05:15.529 },
00:05:15.529 "claimed": true,
00:05:15.529 "claim_type": "exclusive_write",
00:05:15.529 "zoned": false,
00:05:15.529 "supported_io_types": {
00:05:15.529 "read": true,
00:05:15.529 "write": true,
00:05:15.529 "unmap": true,
00:05:15.529 "flush": true,
00:05:15.529 "reset": true,
00:05:15.529 "nvme_admin": false,
00:05:15.529 "nvme_io": false,
00:05:15.529 "nvme_io_md": false,
00:05:15.529 "write_zeroes": true,
00:05:15.529 "zcopy": true,
00:05:15.529 "get_zone_info": false,
00:05:15.529 "zone_management": false,
00:05:15.529 "zone_append": false,
00:05:15.529 "compare": false,
00:05:15.529 "compare_and_write": false,
00:05:15.529 "abort": true,
00:05:15.529 "seek_hole": false,
00:05:15.529 "seek_data": false,
00:05:15.529 "copy": true,
00:05:15.529 "nvme_iov_md": false
00:05:15.529 },
00:05:15.529 "memory_domains": [
00:05:15.529 {
00:05:15.529 "dma_device_id": "system",
00:05:15.529 "dma_device_type": 1
00:05:15.529 },
00:05:15.529 {
00:05:15.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:15.529 "dma_device_type": 2
00:05:15.529 }
00:05:15.529 ],
00:05:15.529 "driver_specific": {}
00:05:15.529 },
00:05:15.529 {
00:05:15.529 "name": "Passthru0",
00:05:15.529 "aliases": [
00:05:15.529 "13e960a5-fc61-5001-a0e2-a7d1316a5541"
00:05:15.529 ],
00:05:15.529 "product_name": "passthru",
00:05:15.529 "block_size": 512,
00:05:15.529 "num_blocks": 16384,
00:05:15.530 "uuid": "13e960a5-fc61-5001-a0e2-a7d1316a5541",
00:05:15.530 "assigned_rate_limits": {
00:05:15.530 "rw_ios_per_sec": 0,
00:05:15.530 "rw_mbytes_per_sec": 0,
00:05:15.530 "r_mbytes_per_sec": 0,
00:05:15.530 "w_mbytes_per_sec": 0
00:05:15.530 },
00:05:15.530 "claimed": false,
00:05:15.530 "zoned": false,
00:05:15.530 "supported_io_types": {
00:05:15.530 "read": true,
00:05:15.530 "write": true,
00:05:15.530 "unmap": true,
00:05:15.530 "flush": true,
00:05:15.530 "reset": true,
00:05:15.530 "nvme_admin": false,
00:05:15.530 "nvme_io": false,
00:05:15.530 "nvme_io_md": false,
00:05:15.530 "write_zeroes": true,
00:05:15.530 "zcopy": true,
00:05:15.530 "get_zone_info": false,
00:05:15.530 "zone_management": false,
00:05:15.530 "zone_append": false,
00:05:15.530 "compare": false,
00:05:15.530 "compare_and_write": false,
00:05:15.530 "abort": true,
00:05:15.530 "seek_hole": false,
00:05:15.530 "seek_data": false,
00:05:15.530 "copy": true,
00:05:15.530 "nvme_iov_md": false
00:05:15.530 },
00:05:15.530 "memory_domains": [
00:05:15.530 {
00:05:15.530 "dma_device_id": "system",
00:05:15.530 "dma_device_type": 1
00:05:15.530 },
00:05:15.530 {
00:05:15.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:15.530 "dma_device_type": 2
00:05:15.530 }
00:05:15.530 ],
00:05:15.530 "driver_specific": {
00:05:15.530 "passthru": {
00:05:15.530 "name": "Passthru0",
00:05:15.530 "base_bdev_name": "Malloc0"
00:05:15.530 }
00:05:15.530 }
00:05:15.530 }
00:05:15.530 ]'
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:15.530 11:51:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:15.530
00:05:15.530 real 0m0.280s
00:05:15.530 user 0m0.178s
00:05:15.530 sys 0m0.041s
11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:15.530 11:51:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.530 ************************************
00:05:15.530 END TEST rpc_integrity
00:05:15.530 ************************************
00:05:15.530 11:51:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:15.530 11:51:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:15.530 11:51:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:15.530 11:51:52 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:15.789 ************************************
00:05:15.789 START TEST rpc_plugins
00:05:15.789 ************************************
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:15.789 {
00:05:15.789 "name": "Malloc1",
00:05:15.789 "aliases": [
00:05:15.789 "3c2fd65d-a5da-45f4-a960-8c3c35e2f200"
00:05:15.789 ],
00:05:15.789 "product_name": "Malloc disk",
00:05:15.789 "block_size": 4096,
00:05:15.789 "num_blocks": 256,
00:05:15.789 "uuid": "3c2fd65d-a5da-45f4-a960-8c3c35e2f200",
00:05:15.789 "assigned_rate_limits": {
00:05:15.789 "rw_ios_per_sec": 0,
00:05:15.789 "rw_mbytes_per_sec": 0,
00:05:15.789 "r_mbytes_per_sec": 0,
00:05:15.789 "w_mbytes_per_sec": 0
00:05:15.789 },
00:05:15.789 "claimed": false,
00:05:15.789 "zoned": false,
00:05:15.789 "supported_io_types": {
00:05:15.789 "read": true,
00:05:15.789 "write": true,
00:05:15.789 "unmap": true,
00:05:15.789 "flush": true,
00:05:15.789 "reset": true,
00:05:15.789 "nvme_admin": false,
00:05:15.789 "nvme_io": false,
00:05:15.789 "nvme_io_md": false,
00:05:15.789 "write_zeroes": true,
00:05:15.789 "zcopy": true,
00:05:15.789 "get_zone_info": false,
00:05:15.789 "zone_management": false,
00:05:15.789 "zone_append": false,
00:05:15.789 "compare": false,
00:05:15.789 "compare_and_write": false,
00:05:15.789 "abort": true,
00:05:15.789 "seek_hole": false,
00:05:15.789 "seek_data": false,
00:05:15.789 "copy": true,
00:05:15.789 "nvme_iov_md": false
00:05:15.789 },
00:05:15.789 "memory_domains": [
00:05:15.789 {
00:05:15.789 "dma_device_id": "system",
00:05:15.789 "dma_device_type": 1
00:05:15.789 },
00:05:15.789 {
00:05:15.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:15.789 "dma_device_type": 2
00:05:15.789 }
00:05:15.789 ],
00:05:15.789 "driver_specific": {}
00:05:15.789 }
00:05:15.789 ]'
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.789 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:15.789 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:15.790 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:15.790 11:51:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:15.790 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:15.790 11:51:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:15.790 11:51:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:15.790
00:05:15.790 real 0m0.154s
00:05:15.790 user 0m0.086s
00:05:15.790 sys 0m0.031s
11:51:53 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:15.790 11:51:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:15.790 ************************************
00:05:15.790 END TEST rpc_plugins
00:05:15.790 ************************************
00:05:15.790 11:51:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:15.790 11:51:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:15.790 11:51:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:15.790 11:51:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.049 ************************************
00:05:16.049 START TEST rpc_trace_cmd_test
00:05:16.049 ************************************
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:16.049 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid890557",
00:05:16.049 "tpoint_group_mask": "0x8",
00:05:16.049 "iscsi_conn": {
00:05:16.049 "mask": "0x2",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "scsi": {
00:05:16.049 "mask": "0x4",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "bdev": {
00:05:16.049 "mask": "0x8",
00:05:16.049 "tpoint_mask": "0xffffffffffffffff"
00:05:16.049 },
00:05:16.049 "nvmf_rdma": {
00:05:16.049 "mask": "0x10",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "nvmf_tcp": {
00:05:16.049 "mask": "0x20",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "ftl": {
00:05:16.049 "mask": "0x40",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "blobfs": {
00:05:16.049 "mask": "0x80",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "dsa": {
00:05:16.049 "mask": "0x200",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "thread": {
00:05:16.049 "mask": "0x400",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "nvme_pcie": {
00:05:16.049 "mask": "0x800",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "iaa": {
00:05:16.049 "mask": "0x1000",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "nvme_tcp": {
00:05:16.049 "mask": "0x2000",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "bdev_nvme": {
00:05:16.049 "mask": "0x4000",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 },
00:05:16.049 "sock": {
00:05:16.049 "mask": "0x8000",
00:05:16.049 "tpoint_mask": "0x0"
00:05:16.049 }
00:05:16.049 }'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:16.049
00:05:16.049 real 0m0.216s
00:05:16.049 user 0m0.173s
00:05:16.049 sys 0m0.035s
11:51:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:16.049 11:51:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:16.049 ************************************
00:05:16.049 END TEST rpc_trace_cmd_test
00:05:16.049 ************************************
00:05:16.308 11:51:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:16.308 11:51:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:16.308 11:51:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:16.308 11:51:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:16.308 11:51:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:16.308 11:51:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.308 ************************************
00:05:16.308 START TEST rpc_daemon_integrity
00:05:16.308 ************************************
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:16.308 {
00:05:16.308 "name": "Malloc2",
00:05:16.308 "aliases": [
00:05:16.308 "6d51384b-34d8-4d41-b535-cd2d7f1168eb"
00:05:16.308 ],
00:05:16.308 "product_name": "Malloc disk",
00:05:16.308 "block_size": 512,
00:05:16.308 "num_blocks": 16384,
00:05:16.308 "uuid": "6d51384b-34d8-4d41-b535-cd2d7f1168eb",
00:05:16.308 "assigned_rate_limits": {
00:05:16.308 "rw_ios_per_sec": 0,
00:05:16.308 "rw_mbytes_per_sec": 0,
00:05:16.308 "r_mbytes_per_sec": 0,
00:05:16.308 "w_mbytes_per_sec": 0
00:05:16.308 },
00:05:16.308 "claimed": false,
00:05:16.308 "zoned": false,
00:05:16.308 "supported_io_types": {
00:05:16.308 "read": true,
00:05:16.308 "write": true,
00:05:16.308 "unmap": true,
00:05:16.308 "flush": true,
00:05:16.308 "reset": true,
00:05:16.308 "nvme_admin": false,
00:05:16.308 "nvme_io": false,
00:05:16.308 "nvme_io_md": false,
00:05:16.308 "write_zeroes": true,
00:05:16.308 "zcopy": true,
00:05:16.308 "get_zone_info": false,
00:05:16.308 "zone_management": false,
00:05:16.308 "zone_append": false,
00:05:16.308 "compare": false,
00:05:16.308 "compare_and_write": false,
00:05:16.308 "abort": true,
00:05:16.308 "seek_hole": false,
00:05:16.308 "seek_data": false,
00:05:16.308 "copy": true,
00:05:16.308 "nvme_iov_md": false
00:05:16.308 },
00:05:16.308 "memory_domains": [
00:05:16.308 {
00:05:16.308 "dma_device_id": "system",
00:05:16.308 "dma_device_type": 1
00:05:16.308 },
00:05:16.308 {
00:05:16.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:16.308 "dma_device_type": 2
00:05:16.308 }
00:05:16.308 ],
00:05:16.308 "driver_specific": {}
00:05:16.308 }
00:05:16.308 ]'
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.308 [2024-07-25 11:51:53.546687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:16.308 [2024-07-25 11:51:53.546718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:16.308 [2024-07-25 11:51:53.546740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x52ac350
00:05:16.308 [2024-07-25 11:51:53.546749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:16.308 [2024-07-25 11:51:53.547462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:16.308 [2024-07-25 11:51:53.547486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:16.308 Passthru0
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.308 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:16.308 {
00:05:16.308 "name": "Malloc2",
00:05:16.308 "aliases": [
00:05:16.308 "6d51384b-34d8-4d41-b535-cd2d7f1168eb"
00:05:16.308 ],
00:05:16.308 "product_name": "Malloc disk",
00:05:16.308 "block_size": 512,
00:05:16.308 "num_blocks": 16384,
00:05:16.308 "uuid": "6d51384b-34d8-4d41-b535-cd2d7f1168eb",
00:05:16.308 "assigned_rate_limits": {
00:05:16.308 "rw_ios_per_sec": 0,
00:05:16.308 "rw_mbytes_per_sec": 0,
00:05:16.308 "r_mbytes_per_sec": 0,
00:05:16.308 "w_mbytes_per_sec": 0
00:05:16.308 },
00:05:16.308 "claimed": true,
00:05:16.308 "claim_type": "exclusive_write",
00:05:16.308 "zoned": false,
00:05:16.308 "supported_io_types": {
00:05:16.308 "read": true,
00:05:16.308 "write": true,
00:05:16.308 "unmap": true,
00:05:16.308 "flush": true,
00:05:16.308 "reset": true,
00:05:16.308 "nvme_admin": false,
00:05:16.308 "nvme_io": false,
00:05:16.308 "nvme_io_md": false,
00:05:16.308 "write_zeroes": true,
00:05:16.308 "zcopy": true,
00:05:16.308 "get_zone_info": false,
00:05:16.308 "zone_management": false,
00:05:16.308 "zone_append": false,
00:05:16.308 "compare": false,
00:05:16.308 "compare_and_write": false,
00:05:16.308 "abort": true,
00:05:16.308 "seek_hole": false,
00:05:16.308 "seek_data": false,
00:05:16.308 "copy": true,
00:05:16.308 "nvme_iov_md": false
00:05:16.308 },
00:05:16.308 "memory_domains": [
00:05:16.308 {
00:05:16.308 "dma_device_id": "system",
00:05:16.308 "dma_device_type": 1
00:05:16.308 },
00:05:16.308 {
00:05:16.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:16.308 "dma_device_type": 2
00:05:16.308 }
00:05:16.308 ],
00:05:16.308 "driver_specific": {}
00:05:16.308 },
00:05:16.308 {
00:05:16.308 "name": "Passthru0",
00:05:16.308 "aliases": [
00:05:16.308 "59fd97de-7211-5f45-ad67-2b0c676e6f0e"
00:05:16.308 ],
00:05:16.308 "product_name": "passthru",
00:05:16.308 "block_size": 512,
00:05:16.308 "num_blocks": 16384,
00:05:16.308 "uuid": "59fd97de-7211-5f45-ad67-2b0c676e6f0e",
00:05:16.308 "assigned_rate_limits": {
00:05:16.308 "rw_ios_per_sec": 0,
00:05:16.308 "rw_mbytes_per_sec": 0,
00:05:16.308 "r_mbytes_per_sec": 0,
00:05:16.308 "w_mbytes_per_sec": 0
00:05:16.308 },
00:05:16.308 "claimed": false,
00:05:16.308 "zoned": false,
00:05:16.308 "supported_io_types": {
00:05:16.308 "read": true,
00:05:16.308 "write": true,
00:05:16.308 "unmap": true,
00:05:16.308 "flush": true,
00:05:16.308 "reset": true,
00:05:16.308 "nvme_admin": false,
00:05:16.308 "nvme_io": false,
00:05:16.308 "nvme_io_md": false,
00:05:16.308 "write_zeroes": true,
00:05:16.308 "zcopy": true,
00:05:16.308 "get_zone_info": false,
00:05:16.308 "zone_management": false,
00:05:16.308 "zone_append": false,
00:05:16.308 "compare": false,
00:05:16.308 "compare_and_write": false,
00:05:16.308 "abort": true,
00:05:16.308 "seek_hole": false,
00:05:16.308 "seek_data": false,
00:05:16.308 "copy": true,
00:05:16.308 "nvme_iov_md": false
00:05:16.308 },
00:05:16.308 "memory_domains": [
00:05:16.308 {
00:05:16.308 "dma_device_id": "system",
00:05:16.308 "dma_device_type": 1
00:05:16.308 },
00:05:16.308 {
00:05:16.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:16.308 "dma_device_type": 2
00:05:16.309 }
00:05:16.309 ],
00:05:16.309 "driver_specific": {
00:05:16.309 "passthru": {
00:05:16.309 "name": "Passthru0",
00:05:16.309 "base_bdev_name": "Malloc2"
00:05:16.309 }
00:05:16.309 }
00:05:16.309 }
00:05:16.309 ]'
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:16.568
00:05:16.568 real 0m0.296s
00:05:16.568 user 0m0.179s
00:05:16.568 sys 0m0.054s
11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:16.568 11:51:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.568 ************************************
00:05:16.568 END TEST rpc_daemon_integrity
00:05:16.568 ************************************
00:05:16.568 11:51:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:16.568 11:51:53 rpc -- rpc/rpc.sh@84 -- # killprocess 890557
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 890557 ']'
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@954 -- # kill -0 890557
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@955 -- # uname
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 890557
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 890557'
00:05:16.568 killing process with pid 890557
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@969 -- # kill 890557
00:05:16.568 11:51:53 rpc -- common/autotest_common.sh@974 -- # wait 890557
00:05:17.136
00:05:17.137
00:05:17.137 real 0m2.631s
00:05:17.137 user 0m3.267s
00:05:17.137 sys 0m0.872s
11:51:54 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:17.137 11:51:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.137 ************************************
00:05:17.137 END TEST rpc
00:05:17.137 ************************************
00:05:17.137 11:51:54 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:17.137 11:51:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:17.137 11:51:54 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:17.137 11:51:54 -- common/autotest_common.sh@10 -- # set +x
00:05:17.137 ************************************
00:05:17.137 START TEST skip_rpc
00:05:17.137 ************************************
00:05:17.137 11:51:54 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:17.137 * Looking for test storage...
00:05:17.137 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc
00:05:17.137 11:51:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json
00:05:17.137 11:51:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt
00:05:17.137 11:51:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:17.137 11:51:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:17.137 11:51:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:17.137 11:51:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.137 ************************************
00:05:17.137 START TEST skip_rpc
00:05:17.137 ************************************
00:05:17.137 11:51:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc
00:05:17.137 11:51:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=891098
00:05:17.137 11:51:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:17.137 11:51:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:17.137 11:51:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:17.137 [2024-07-25 11:51:54.399027] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:05:17.137 [2024-07-25 11:51:54.399093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891098 ]
00:05:17.396 EAL: No free 2048 kB hugepages reported on node 1
00:05:17.396 [2024-07-25 11:51:54.480354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:17.396 [2024-07-25 11:51:54.560903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 891098
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 891098 ']'
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 891098
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 891098
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 891098'
00:05:22.710 killing process with pid 891098
00:05:22.710 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 891098
00:05:22.711 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 891098
00:05:22.711
00:05:22.711 real 0m5.393s
00:05:22.711 user 0m5.126s
00:05:22.711 sys 0m0.304s
11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:22.711 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.711 ************************************
00:05:22.711 END TEST skip_rpc
00:05:22.711 ************************************
00:05:22.711 11:51:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:22.711 11:51:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:22.711 11:51:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:22.711 11:51:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.711 ************************************
00:05:22.711 START TEST skip_rpc_with_json
00:05:22.711 ************************************
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=891854
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 891854
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 891854 ']'
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:22.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:22.711 11:51:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:22.711 [2024-07-25 11:51:59.876506] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:05:22.711 [2024-07-25 11:51:59.876576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891854 ]
00:05:22.711 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.711 [2024-07-25 11:51:59.963682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.970 [2024-07-25 11:52:00.058420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:23.538 [2024-07-25 11:52:00.713772] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:23.538 request:
00:05:23.538 {
00:05:23.538 "trtype": "tcp",
00:05:23.538 "method": "nvmf_get_transports",
00:05:23.538 "req_id": 1
00:05:23.538 }
00:05:23.538 Got JSON-RPC error response
00:05:23.538 response:
00:05:23.538 {
00:05:23.538 "code": -19,
00:05:23.538 "message": "No such device"
00:05:23.538 }
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:23.538 [2024-07-25 11:52:00.725885] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:23.538 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json
00:05:23.797 {
00:05:23.797 "subsystems": [
00:05:23.797 {
00:05:23.797 "subsystem": "scheduler",
00:05:23.797 "config": [
00:05:23.797 {
00:05:23.797 "method": "framework_set_scheduler",
00:05:23.797 "params": {
00:05:23.797 "name": "static"
00:05:23.797 }
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "vmd",
00:05:23.797 "config": []
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "sock",
00:05:23.797 "config": [
00:05:23.797 {
00:05:23.797 "method": "sock_set_default_impl",
00:05:23.797 "params": {
00:05:23.797 "impl_name": "posix"
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "sock_impl_set_options",
00:05:23.797 "params": {
00:05:23.797 "impl_name": "ssl",
00:05:23.797 "recv_buf_size": 4096,
00:05:23.797 "send_buf_size": 4096,
00:05:23.797 "enable_recv_pipe": true,
00:05:23.797 "enable_quickack": false,
00:05:23.797 "enable_placement_id": 0,
00:05:23.797 "enable_zerocopy_send_server": true,
00:05:23.797 "enable_zerocopy_send_client": false,
00:05:23.797 "zerocopy_threshold": 0,
00:05:23.797 "tls_version": 0,
00:05:23.797 "enable_ktls": false
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "sock_impl_set_options",
00:05:23.797 "params": {
00:05:23.797 "impl_name": "posix",
00:05:23.797 "recv_buf_size": 2097152,
00:05:23.797 "send_buf_size": 2097152,
00:05:23.797 "enable_recv_pipe": true,
00:05:23.797 "enable_quickack": false,
00:05:23.797 "enable_placement_id": 0,
00:05:23.797 "enable_zerocopy_send_server": true,
00:05:23.797 "enable_zerocopy_send_client": false,
00:05:23.797 "zerocopy_threshold": 0,
00:05:23.797 "tls_version": 0,
00:05:23.797 "enable_ktls": false
00:05:23.797 }
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "iobuf",
00:05:23.797 "config": [
00:05:23.797 {
00:05:23.797 "method": "iobuf_set_options",
00:05:23.797 "params": {
00:05:23.797 "small_pool_count": 8192,
00:05:23.797 "large_pool_count": 1024,
00:05:23.797 "small_bufsize": 8192,
00:05:23.797 "large_bufsize": 135168
00:05:23.797 }
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "keyring",
00:05:23.797 "config": []
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "vfio_user_target",
00:05:23.797 "config": null
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "accel",
00:05:23.797 "config": [
00:05:23.797 {
00:05:23.797 "method": "accel_set_options",
00:05:23.797 "params": {
00:05:23.797 "small_cache_size": 128,
00:05:23.797 "large_cache_size": 16,
00:05:23.797 "task_count": 2048,
00:05:23.797 "sequence_count": 2048,
00:05:23.797 "buf_count": 2048
00:05:23.797 }
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "bdev",
00:05:23.797 "config": [
00:05:23.797 {
00:05:23.797 "method": "bdev_set_options",
00:05:23.797 "params": {
00:05:23.797 "bdev_io_pool_size": 65535,
00:05:23.797 "bdev_io_cache_size": 256,
00:05:23.797 "bdev_auto_examine": true,
00:05:23.797 "iobuf_small_cache_size": 128,
00:05:23.797 "iobuf_large_cache_size": 16
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "bdev_raid_set_options",
00:05:23.797 "params": {
00:05:23.797 "process_window_size_kb": 1024,
00:05:23.797 "process_max_bandwidth_mb_sec": 0
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "bdev_nvme_set_options",
00:05:23.797 "params": {
00:05:23.797 "action_on_timeout": "none",
00:05:23.797 "timeout_us": 0,
00:05:23.797 "timeout_admin_us": 0,
00:05:23.797 "keep_alive_timeout_ms": 10000,
00:05:23.797 "arbitration_burst": 0,
00:05:23.797 "low_priority_weight": 0,
00:05:23.797 "medium_priority_weight": 0,
00:05:23.797 "high_priority_weight": 0,
00:05:23.797 "nvme_adminq_poll_period_us": 10000,
00:05:23.797 "nvme_ioq_poll_period_us": 0,
00:05:23.797 "io_queue_requests": 0,
00:05:23.797 "delay_cmd_submit": true,
00:05:23.797 "transport_retry_count": 4,
00:05:23.797 "bdev_retry_count": 3,
00:05:23.797 "transport_ack_timeout": 0,
00:05:23.797 "ctrlr_loss_timeout_sec": 0,
00:05:23.797 "reconnect_delay_sec": 0,
00:05:23.797 "fast_io_fail_timeout_sec": 0,
00:05:23.797 "disable_auto_failback": false,
00:05:23.797 "generate_uuids": false,
00:05:23.797 "transport_tos": 0,
00:05:23.797 "nvme_error_stat": false,
00:05:23.797 "rdma_srq_size": 0,
00:05:23.797 "io_path_stat": false,
00:05:23.797 "allow_accel_sequence": false,
00:05:23.797 "rdma_max_cq_size": 0,
00:05:23.797 "rdma_cm_event_timeout_ms": 0,
00:05:23.797 "dhchap_digests": [
00:05:23.797 "sha256",
00:05:23.797 "sha384",
00:05:23.797 "sha512"
00:05:23.797 ],
00:05:23.797 "dhchap_dhgroups": [
00:05:23.797 "null",
00:05:23.797 "ffdhe2048",
00:05:23.797 "ffdhe3072",
00:05:23.797 "ffdhe4096",
00:05:23.797 "ffdhe6144",
00:05:23.797 "ffdhe8192"
00:05:23.797 ]
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "bdev_nvme_set_hotplug",
00:05:23.797 "params": {
00:05:23.797 "period_us": 100000,
00:05:23.797 "enable": false
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "bdev_iscsi_set_options",
00:05:23.797 "params": {
00:05:23.797 "timeout_sec": 30
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "bdev_wait_for_examine"
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "nvmf",
00:05:23.797 "config": [
00:05:23.797 {
00:05:23.797 "method": "nvmf_set_config",
00:05:23.797 "params": {
00:05:23.797 "discovery_filter": "match_any",
00:05:23.797 "admin_cmd_passthru": {
00:05:23.797 "identify_ctrlr": false
00:05:23.797 }
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "nvmf_set_max_subsystems",
00:05:23.797 "params": {
00:05:23.797 "max_subsystems": 1024
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "nvmf_set_crdt",
00:05:23.797 "params": {
00:05:23.797 "crdt1": 0,
00:05:23.797 "crdt2": 0,
00:05:23.797 "crdt3": 0
00:05:23.797 }
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "method": "nvmf_create_transport",
00:05:23.797 "params": {
00:05:23.797 "trtype": "TCP",
00:05:23.797 "max_queue_depth": 128,
00:05:23.797 "max_io_qpairs_per_ctrlr": 127,
00:05:23.797 "in_capsule_data_size": 4096,
00:05:23.797 "max_io_size": 131072,
00:05:23.797 "io_unit_size": 131072,
00:05:23.797 "max_aq_depth": 128,
00:05:23.797 "num_shared_buffers": 511,
00:05:23.797 "buf_cache_size": 4294967295,
00:05:23.797 "dif_insert_or_strip": false,
00:05:23.797 "zcopy": false,
00:05:23.797 "c2h_success": true,
00:05:23.797 "sock_priority": 0,
00:05:23.797 "abort_timeout_sec": 1,
00:05:23.797 "ack_timeout": 0,
00:05:23.797 "data_wr_pool_size": 0
00:05:23.797 }
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "nbd",
00:05:23.797 "config": []
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "ublk",
00:05:23.797 "config": []
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "vhost_blk",
00:05:23.797 "config": []
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "scsi",
00:05:23.797 "config": null
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "iscsi",
00:05:23.797 "config": [
00:05:23.797 {
00:05:23.797 "method": "iscsi_set_options",
00:05:23.797 "params": {
00:05:23.797 "node_base": "iqn.2016-06.io.spdk",
00:05:23.797 "max_sessions": 128,
00:05:23.797 "max_connections_per_session": 2,
00:05:23.797 "max_queue_depth": 64,
00:05:23.797 "default_time2wait": 2,
00:05:23.797 "default_time2retain": 20,
00:05:23.797 "first_burst_length": 8192,
00:05:23.797 "immediate_data": true,
00:05:23.797 "allow_duplicated_isid": false,
00:05:23.797 "error_recovery_level": 0,
00:05:23.797 "nop_timeout": 60,
00:05:23.797 "nop_in_interval": 30,
00:05:23.797 "disable_chap": false,
00:05:23.797 "require_chap": false,
00:05:23.797 "mutual_chap": false,
00:05:23.797 "chap_group": 0,
00:05:23.797 "max_large_datain_per_connection": 64,
00:05:23.797 "max_r2t_per_connection": 4,
00:05:23.797 "pdu_pool_size": 36864,
00:05:23.797 "immediate_data_pool_size": 16384,
00:05:23.797 "data_out_pool_size": 2048
00:05:23.797 }
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 },
00:05:23.797 {
00:05:23.797 "subsystem": "vhost_scsi",
00:05:23.797 "config": []
00:05:23.797 }
00:05:23.797 ]
00:05:23.797 }
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 891854
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 891854 ']'
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 891854
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 891854
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 891854'
00:05:23.797 killing process with pid 891854
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 891854
00:05:23.797 11:52:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 891854
00:05:24.055 11:52:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=892050
00:05:24.055 11:52:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json
00:05:24.055 11:52:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 892050
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 892050 ']'
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 892050
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 892050
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 892050'
00:05:29.327 killing process with pid 892050
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 892050
00:05:29.327 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 892050
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt
00:05:29.588
00:05:29.588 real 0m6.826s
00:05:29.588 user 0m6.538s
00:05:29.588 sys 0m0.727s
11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:29.588 ************************************
00:05:29.588 END TEST skip_rpc_with_json
00:05:29.588 ************************************
00:05:29.588 11:52:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:29.588 11:52:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:29.588 11:52:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:29.588 11:52:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:29.588 ************************************
00:05:29.588 START TEST skip_rpc_with_delay
00:05:29.588 ************************************
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:29.588 [2024-07-25 11:52:06.788525] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:29.588 [2024-07-25 11:52:06.788658] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:29.588
00:05:29.588 real 0m0.045s
00:05:29.588 user 0m0.014s
00:05:29.588 sys 0m0.031s
11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:29.588 11:52:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:29.588 ************************************
00:05:29.588 END TEST skip_rpc_with_delay
00:05:29.588 ************************************
00:05:29.588 11:52:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:29.588 11:52:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:29.588 11:52:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:29.588 11:52:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:29.588 11:52:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:29.588 11:52:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:29.848 ************************************
00:05:29.848 START TEST exit_on_failed_rpc_init
00:05:29.848 ************************************
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=892829
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 892829
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 892829 ']'
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:29.848 11:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:29.848 [2024-07-25 11:52:06.911185] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:05:29.848 [2024-07-25 11:52:06.911257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892829 ]
00:05:29.848 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.848 [2024-07-25 11:52:06.994868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.848 [2024-07-25 11:52:07.086183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:30.783 11:52:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:30.783 [2024-07-25 11:52:07.776187] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:05:30.783 [2024-07-25 11:52:07.776276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893014 ]
00:05:30.783 EAL: No free 2048 kB hugepages reported on node 1
00:05:30.783 [2024-07-25 11:52:07.860827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.783 [2024-07-25 11:52:07.943056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:30.783 [2024-07-25 11:52:07.943141] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:30.783 [2024-07-25 11:52:07.943153] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:30.783 [2024-07-25 11:52:07.943161] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 892829
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 892829 ']'
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 892829
00:05:30.783 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:05:30.784 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:30.784 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 892829
00:05:30.784 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:30.784 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:30.784 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 892829'
00:05:30.784 killing process with pid 892829
00:05:30.784 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 892829
00:05:30.784 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 892829
00:05:31.352
00:05:31.352 real 0m1.516s
00:05:31.352 user 0m1.688s
00:05:31.352 sys 0m0.475s
11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.352 11:52:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:31.352 ************************************
00:05:31.352 END TEST exit_on_failed_rpc_init
00:05:31.352 ************************************
00:05:31.352 11:52:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json
00:05:31.352
00:05:31.352 real 0m14.229s
00:05:31.352 user 0m13.514s
00:05:31.352 sys 0m1.875s
11:52:08 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.352 11:52:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.352 ************************************
00:05:31.352 END TEST skip_rpc
00:05:31.352 ************************************
00:05:31.352 11:52:08 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:31.352 11:52:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.352 11:52:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.352 11:52:08 -- common/autotest_common.sh@10 -- # set +x
00:05:31.352 ************************************
00:05:31.352 START TEST rpc_client
00:05:31.352 ************************************
00:05:31.352 11:52:08 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:31.352 * Looking for test storage...
00:05:31.352 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client
00:05:31.352 11:52:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:31.612 OK
00:05:31.612 11:52:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:31.612
00:05:31.612 real 0m0.136s
00:05:31.612 user 0m0.062s
00:05:31.612 sys 0m0.085s
00:05:31.612 11:52:08 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.612 11:52:08 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:31.612 ************************************
00:05:31.612 END TEST rpc_client
00:05:31.612 ************************************
00:05:31.612 11:52:08 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh
00:05:31.612 11:52:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.612 11:52:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.612 11:52:08 -- common/autotest_common.sh@10 -- # set +x
00:05:31.612 ************************************
00:05:31.612 START TEST json_config
00:05:31.612 ************************************
00:05:31.612 11:52:08 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh
00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:05:31.612 11:52:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:31.612 11:52:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:31.612 11:52:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:31.612 11:52:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.612 11:52:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.612 11:52:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.612 11:52:08 json_config -- paths/export.sh@5 -- # export PATH
00:05:31.612 11:52:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@47 -- # : 0
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:31.612 11:52:08 json_config -- nvmf/common.sh@33 --
# '[' -n '' ']' 00:05:31.612 11:52:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.612 11:52:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:31.612 WARNING: No tests are enabled so not running JSON configuration tests 00:05:31.612 11:52:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:31.612 00:05:31.612 real 0m0.107s 00:05:31.612 user 0m0.050s 00:05:31.612 sys 0m0.058s 00:05:31.612 11:52:08 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.612 11:52:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.612 ************************************ 00:05:31.612 END TEST json_config 00:05:31.612 ************************************ 00:05:31.612 11:52:08 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.612 11:52:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.612 11:52:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.612 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:31.872 ************************************ 00:05:31.872 START TEST json_config_extra_key 00:05:31.872 ************************************ 00:05:31.872 11:52:08 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.872 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:05:31.872 11:52:09 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:05:31.872 11:52:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:31.873 11:52:09 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.873 11:52:09 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.873 11:52:09 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.873 11:52:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.873 11:52:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.873 11:52:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.873 11:52:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:31.873 11:52:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.873 11:52:09 
json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.873 11:52:09 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:31.873 INFO: launching applications... 00:05:31.873 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=893336 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.873 Waiting for target to run... 
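At this point spdk_tgt has been launched in the background with --json, and the waitforlisten step that follows blocks until the RPC UNIX socket at /var/tmp/spdk_tgt.sock is usable. A simplified stand-in for that wait loop; assumption: the real waitforlisten also probes the socket with an actual RPC call rather than only checking that the socket file exists:

wait_for_rpc_sock() {            # illustrative sketch, not the harness implementation
  local sock=${1:-/var/tmp/spdk_tgt.sock} i
  for ((i = 0; i < 100; i++)); do
    [ -S "$sock" ] && return 0   # target created its RPC listen socket
    sleep 0.1
  done
  return 1                       # target never came up
}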
00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 893336 /var/tmp/spdk_tgt.sock 00:05:31.873 11:52:09 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 893336 ']' 00:05:31.873 11:52:09 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.873 11:52:09 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.873 11:52:09 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.873 11:52:09 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.873 11:52:09 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.873 11:52:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.873 [2024-07-25 11:52:09.082950] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:31.873 [2024-07-25 11:52:09.083019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893336 ] 00:05:31.873 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.442 [2024-07-25 11:52:09.596412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.442 [2024-07-25 11:52:09.690945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.702 11:52:09 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.702 11:52:09 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:32.702 00:05:32.702 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:32.702 INFO: shutting down applications... 
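The shutdown that follows is the poll loop traced from json_config/common.sh: send SIGINT to the app pid, then re-check it with kill -0 for up to 30 half-second intervals before printing 'SPDK target shutdown done'. Condensed into a sketch of the same logic:

shutdown_app() {                          # condensed from the traced shutdown loop
  local pid=$1 i
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break   # pid gone: graceful shutdown worked
    sleep 0.5
  done
  kill -0 "$pid" 2>/dev/null && return 1  # still alive after ~15s: report failure
  echo 'SPDK target shutdown done'
}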
00:05:32.702 11:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 893336 ]] 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 893336 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 893336 00:05:32.702 11:52:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.268 11:52:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.268 11:52:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.268 11:52:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 893336 00:05:33.268 11:52:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.268 11:52:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:33.268 11:52:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.268 11:52:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.268 SPDK target shutdown done 00:05:33.268 11:52:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:33.268 Success 00:05:33.268 00:05:33.268 real 0m1.497s 00:05:33.268 user 0m1.052s 00:05:33.268 sys 0m0.641s 00:05:33.268 11:52:10 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.268 11:52:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.268 ************************************ 00:05:33.268 END TEST json_config_extra_key 00:05:33.268 ************************************ 00:05:33.268 11:52:10 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.268 11:52:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.268 11:52:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.268 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:33.268 ************************************ 00:05:33.268 START TEST alias_rpc 00:05:33.268 ************************************ 00:05:33.268 11:52:10 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.527 * Looking for test storage... 
00:05:33.527 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:33.527 11:52:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.527 11:52:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=893575 00:05:33.527 11:52:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 893575 00:05:33.527 11:52:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.527 11:52:10 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 893575 ']' 00:05:33.527 11:52:10 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.527 11:52:10 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.527 11:52:10 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.527 11:52:10 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.527 11:52:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.527 [2024-07-25 11:52:10.663761] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:33.527 [2024-07-25 11:52:10.663837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893575 ] 00:05:33.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.527 [2024-07-25 11:52:10.747229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.786 [2024-07-25 11:52:10.835109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.355 11:52:11 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.355 11:52:11 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.355 11:52:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:34.613 11:52:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 893575 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 893575 ']' 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 893575 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893575 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 893575' 00:05:34.613 killing process with pid 893575 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@969 -- # kill 893575 00:05:34.613 11:52:11 alias_rpc -- common/autotest_common.sh@974 -- # wait 893575 00:05:34.873 00:05:34.873 real 0m1.559s 00:05:34.873 user 0m1.640s 00:05:34.873 sys 0m0.482s 00:05:34.873 11:52:12 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.873 11:52:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 
************************************ 00:05:34.873 END TEST alias_rpc 00:05:34.873 ************************************ 00:05:34.873 11:52:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:34.873 11:52:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.873 11:52:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.873 11:52:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.873 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 ************************************ 00:05:34.873 START TEST spdkcli_tcp 00:05:34.873 ************************************ 00:05:34.873 11:52:12 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:35.131 * Looking for test storage... 00:05:35.131 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=893818 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 893818 00:05:35.131 11:52:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 893818 ']' 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.131 11:52:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.131 [2024-07-25 11:52:12.314987] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:35.131 [2024-07-25 11:52:12.315081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893818 ] 00:05:35.131 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.131 [2024-07-25 11:52:12.400596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.389 [2024-07-25 11:52:12.491216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.389 [2024-07-25 11:52:12.491216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.954 11:52:13 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.954 11:52:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:35.954 11:52:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=893993 00:05:35.954 11:52:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.954 11:52:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:36.217 [ 00:05:36.217 "spdk_get_version", 00:05:36.217 "rpc_get_methods", 00:05:36.217 "trace_get_info", 00:05:36.217 "trace_get_tpoint_group_mask", 00:05:36.217 "trace_disable_tpoint_group", 00:05:36.217 "trace_enable_tpoint_group", 00:05:36.217 "trace_clear_tpoint_mask", 00:05:36.217 "trace_set_tpoint_mask", 00:05:36.217 "vfu_tgt_set_base_path", 00:05:36.217 "framework_get_pci_devices", 00:05:36.217 "framework_get_config", 00:05:36.217 "framework_get_subsystems", 00:05:36.217 "keyring_get_keys", 00:05:36.217 "iobuf_get_stats", 00:05:36.217 "iobuf_set_options", 00:05:36.217 "sock_get_default_impl", 00:05:36.217 "sock_set_default_impl", 00:05:36.217 "sock_impl_set_options", 00:05:36.217 "sock_impl_get_options", 00:05:36.217 "vmd_rescan", 00:05:36.217 "vmd_remove_device", 00:05:36.217 "vmd_enable", 00:05:36.217 "accel_get_stats", 00:05:36.217 "accel_set_options", 00:05:36.217 "accel_set_driver", 00:05:36.217 "accel_crypto_key_destroy", 00:05:36.217 "accel_crypto_keys_get", 00:05:36.217 "accel_crypto_key_create", 00:05:36.217 "accel_assign_opc", 00:05:36.217 "accel_get_module_info", 00:05:36.217 "accel_get_opc_assignments", 00:05:36.217 "notify_get_notifications", 00:05:36.217 "notify_get_types", 00:05:36.217 "bdev_get_histogram", 00:05:36.217 "bdev_enable_histogram", 00:05:36.217 "bdev_set_qos_limit", 00:05:36.217 "bdev_set_qd_sampling_period", 00:05:36.217 "bdev_get_bdevs", 00:05:36.217 "bdev_reset_iostat", 00:05:36.217 "bdev_get_iostat", 00:05:36.217 "bdev_examine", 00:05:36.217 "bdev_wait_for_examine", 00:05:36.217 "bdev_set_options", 00:05:36.217 "scsi_get_devices", 00:05:36.217 "thread_set_cpumask", 00:05:36.217 "framework_get_governor", 00:05:36.217 "framework_get_scheduler", 00:05:36.217 "framework_set_scheduler", 00:05:36.217 "framework_get_reactors", 00:05:36.217 "thread_get_io_channels", 00:05:36.217 "thread_get_pollers", 00:05:36.217 "thread_get_stats", 00:05:36.217 "framework_monitor_context_switch", 00:05:36.217 "spdk_kill_instance", 00:05:36.217 "log_enable_timestamps", 00:05:36.217 "log_get_flags", 00:05:36.217 "log_clear_flag", 00:05:36.217 "log_set_flag", 00:05:36.217 "log_get_level", 00:05:36.217 "log_set_level", 00:05:36.217 "log_get_print_level", 00:05:36.217 "log_set_print_level", 00:05:36.217 "framework_enable_cpumask_locks", 00:05:36.217 "framework_disable_cpumask_locks", 
00:05:36.217 "framework_wait_init", 00:05:36.217 "framework_start_init", 00:05:36.217 "virtio_blk_create_transport", 00:05:36.217 "virtio_blk_get_transports", 00:05:36.217 "vhost_controller_set_coalescing", 00:05:36.217 "vhost_get_controllers", 00:05:36.217 "vhost_delete_controller", 00:05:36.217 "vhost_create_blk_controller", 00:05:36.217 "vhost_scsi_controller_remove_target", 00:05:36.217 "vhost_scsi_controller_add_target", 00:05:36.217 "vhost_start_scsi_controller", 00:05:36.217 "vhost_create_scsi_controller", 00:05:36.217 "ublk_recover_disk", 00:05:36.217 "ublk_get_disks", 00:05:36.217 "ublk_stop_disk", 00:05:36.217 "ublk_start_disk", 00:05:36.217 "ublk_destroy_target", 00:05:36.217 "ublk_create_target", 00:05:36.217 "nbd_get_disks", 00:05:36.217 "nbd_stop_disk", 00:05:36.217 "nbd_start_disk", 00:05:36.217 "env_dpdk_get_mem_stats", 00:05:36.217 "nvmf_stop_mdns_prr", 00:05:36.217 "nvmf_publish_mdns_prr", 00:05:36.217 "nvmf_subsystem_get_listeners", 00:05:36.217 "nvmf_subsystem_get_qpairs", 00:05:36.217 "nvmf_subsystem_get_controllers", 00:05:36.217 "nvmf_get_stats", 00:05:36.217 "nvmf_get_transports", 00:05:36.217 "nvmf_create_transport", 00:05:36.217 "nvmf_get_targets", 00:05:36.217 "nvmf_delete_target", 00:05:36.217 "nvmf_create_target", 00:05:36.217 "nvmf_subsystem_allow_any_host", 00:05:36.217 "nvmf_subsystem_remove_host", 00:05:36.217 "nvmf_subsystem_add_host", 00:05:36.217 "nvmf_ns_remove_host", 00:05:36.217 "nvmf_ns_add_host", 00:05:36.217 "nvmf_subsystem_remove_ns", 00:05:36.217 "nvmf_subsystem_add_ns", 00:05:36.217 "nvmf_subsystem_listener_set_ana_state", 00:05:36.217 "nvmf_discovery_get_referrals", 00:05:36.217 "nvmf_discovery_remove_referral", 00:05:36.217 "nvmf_discovery_add_referral", 00:05:36.217 "nvmf_subsystem_remove_listener", 00:05:36.217 "nvmf_subsystem_add_listener", 00:05:36.217 "nvmf_delete_subsystem", 00:05:36.217 "nvmf_create_subsystem", 00:05:36.217 "nvmf_get_subsystems", 00:05:36.217 "nvmf_set_crdt", 00:05:36.217 "nvmf_set_config", 00:05:36.217 "nvmf_set_max_subsystems", 00:05:36.217 "iscsi_get_histogram", 00:05:36.217 "iscsi_enable_histogram", 00:05:36.217 "iscsi_set_options", 00:05:36.217 "iscsi_get_auth_groups", 00:05:36.217 "iscsi_auth_group_remove_secret", 00:05:36.217 "iscsi_auth_group_add_secret", 00:05:36.217 "iscsi_delete_auth_group", 00:05:36.217 "iscsi_create_auth_group", 00:05:36.217 "iscsi_set_discovery_auth", 00:05:36.217 "iscsi_get_options", 00:05:36.217 "iscsi_target_node_request_logout", 00:05:36.217 "iscsi_target_node_set_redirect", 00:05:36.217 "iscsi_target_node_set_auth", 00:05:36.217 "iscsi_target_node_add_lun", 00:05:36.217 "iscsi_get_stats", 00:05:36.217 "iscsi_get_connections", 00:05:36.217 "iscsi_portal_group_set_auth", 00:05:36.217 "iscsi_start_portal_group", 00:05:36.217 "iscsi_delete_portal_group", 00:05:36.217 "iscsi_create_portal_group", 00:05:36.217 "iscsi_get_portal_groups", 00:05:36.217 "iscsi_delete_target_node", 00:05:36.217 "iscsi_target_node_remove_pg_ig_maps", 00:05:36.217 "iscsi_target_node_add_pg_ig_maps", 00:05:36.217 "iscsi_create_target_node", 00:05:36.217 "iscsi_get_target_nodes", 00:05:36.217 "iscsi_delete_initiator_group", 00:05:36.217 "iscsi_initiator_group_remove_initiators", 00:05:36.217 "iscsi_initiator_group_add_initiators", 00:05:36.217 "iscsi_create_initiator_group", 00:05:36.217 "iscsi_get_initiator_groups", 00:05:36.217 "keyring_linux_set_options", 00:05:36.217 "keyring_file_remove_key", 00:05:36.217 "keyring_file_add_key", 00:05:36.217 "vfu_virtio_create_scsi_endpoint", 00:05:36.218 
"vfu_virtio_scsi_remove_target", 00:05:36.218 "vfu_virtio_scsi_add_target", 00:05:36.218 "vfu_virtio_create_blk_endpoint", 00:05:36.218 "vfu_virtio_delete_endpoint", 00:05:36.218 "iaa_scan_accel_module", 00:05:36.218 "dsa_scan_accel_module", 00:05:36.218 "ioat_scan_accel_module", 00:05:36.218 "accel_error_inject_error", 00:05:36.218 "bdev_iscsi_delete", 00:05:36.218 "bdev_iscsi_create", 00:05:36.218 "bdev_iscsi_set_options", 00:05:36.218 "bdev_virtio_attach_controller", 00:05:36.218 "bdev_virtio_scsi_get_devices", 00:05:36.218 "bdev_virtio_detach_controller", 00:05:36.218 "bdev_virtio_blk_set_hotplug", 00:05:36.218 "bdev_ftl_set_property", 00:05:36.218 "bdev_ftl_get_properties", 00:05:36.218 "bdev_ftl_get_stats", 00:05:36.218 "bdev_ftl_unmap", 00:05:36.218 "bdev_ftl_unload", 00:05:36.218 "bdev_ftl_delete", 00:05:36.218 "bdev_ftl_load", 00:05:36.218 "bdev_ftl_create", 00:05:36.218 "bdev_aio_delete", 00:05:36.218 "bdev_aio_rescan", 00:05:36.218 "bdev_aio_create", 00:05:36.218 "blobfs_create", 00:05:36.218 "blobfs_detect", 00:05:36.218 "blobfs_set_cache_size", 00:05:36.218 "bdev_zone_block_delete", 00:05:36.218 "bdev_zone_block_create", 00:05:36.218 "bdev_delay_delete", 00:05:36.218 "bdev_delay_create", 00:05:36.218 "bdev_delay_update_latency", 00:05:36.218 "bdev_split_delete", 00:05:36.218 "bdev_split_create", 00:05:36.218 "bdev_error_inject_error", 00:05:36.218 "bdev_error_delete", 00:05:36.218 "bdev_error_create", 00:05:36.218 "bdev_raid_set_options", 00:05:36.218 "bdev_raid_remove_base_bdev", 00:05:36.218 "bdev_raid_add_base_bdev", 00:05:36.218 "bdev_raid_delete", 00:05:36.218 "bdev_raid_create", 00:05:36.218 "bdev_raid_get_bdevs", 00:05:36.218 "bdev_lvol_set_parent_bdev", 00:05:36.218 "bdev_lvol_set_parent", 00:05:36.218 "bdev_lvol_check_shallow_copy", 00:05:36.218 "bdev_lvol_start_shallow_copy", 00:05:36.218 "bdev_lvol_grow_lvstore", 00:05:36.218 "bdev_lvol_get_lvols", 00:05:36.218 "bdev_lvol_get_lvstores", 00:05:36.218 "bdev_lvol_delete", 00:05:36.218 "bdev_lvol_set_read_only", 00:05:36.218 "bdev_lvol_resize", 00:05:36.218 "bdev_lvol_decouple_parent", 00:05:36.218 "bdev_lvol_inflate", 00:05:36.218 "bdev_lvol_rename", 00:05:36.218 "bdev_lvol_clone_bdev", 00:05:36.218 "bdev_lvol_clone", 00:05:36.218 "bdev_lvol_snapshot", 00:05:36.218 "bdev_lvol_create", 00:05:36.218 "bdev_lvol_delete_lvstore", 00:05:36.218 "bdev_lvol_rename_lvstore", 00:05:36.218 "bdev_lvol_create_lvstore", 00:05:36.218 "bdev_passthru_delete", 00:05:36.218 "bdev_passthru_create", 00:05:36.218 "bdev_nvme_cuse_unregister", 00:05:36.218 "bdev_nvme_cuse_register", 00:05:36.218 "bdev_opal_new_user", 00:05:36.218 "bdev_opal_set_lock_state", 00:05:36.218 "bdev_opal_delete", 00:05:36.218 "bdev_opal_get_info", 00:05:36.218 "bdev_opal_create", 00:05:36.218 "bdev_nvme_opal_revert", 00:05:36.218 "bdev_nvme_opal_init", 00:05:36.218 "bdev_nvme_send_cmd", 00:05:36.218 "bdev_nvme_get_path_iostat", 00:05:36.218 "bdev_nvme_get_mdns_discovery_info", 00:05:36.218 "bdev_nvme_stop_mdns_discovery", 00:05:36.218 "bdev_nvme_start_mdns_discovery", 00:05:36.218 "bdev_nvme_set_multipath_policy", 00:05:36.218 "bdev_nvme_set_preferred_path", 00:05:36.218 "bdev_nvme_get_io_paths", 00:05:36.218 "bdev_nvme_remove_error_injection", 00:05:36.218 "bdev_nvme_add_error_injection", 00:05:36.218 "bdev_nvme_get_discovery_info", 00:05:36.218 "bdev_nvme_stop_discovery", 00:05:36.218 "bdev_nvme_start_discovery", 00:05:36.218 "bdev_nvme_get_controller_health_info", 00:05:36.218 "bdev_nvme_disable_controller", 00:05:36.218 "bdev_nvme_enable_controller", 00:05:36.218 
"bdev_nvme_reset_controller", 00:05:36.218 "bdev_nvme_get_transport_statistics", 00:05:36.218 "bdev_nvme_apply_firmware", 00:05:36.218 "bdev_nvme_detach_controller", 00:05:36.218 "bdev_nvme_get_controllers", 00:05:36.218 "bdev_nvme_attach_controller", 00:05:36.218 "bdev_nvme_set_hotplug", 00:05:36.218 "bdev_nvme_set_options", 00:05:36.218 "bdev_null_resize", 00:05:36.218 "bdev_null_delete", 00:05:36.218 "bdev_null_create", 00:05:36.218 "bdev_malloc_delete", 00:05:36.218 "bdev_malloc_create" 00:05:36.218 ] 00:05:36.218 11:52:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.218 11:52:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:36.218 11:52:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 893818 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 893818 ']' 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 893818 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893818 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 893818' 00:05:36.218 killing process with pid 893818 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 893818 00:05:36.218 11:52:13 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 893818 00:05:36.477 00:05:36.477 real 0m1.568s 00:05:36.477 user 0m2.813s 00:05:36.477 sys 0m0.535s 00:05:36.477 11:52:13 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.477 11:52:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.477 ************************************ 00:05:36.477 END TEST spdkcli_tcp 00:05:36.477 ************************************ 00:05:36.734 11:52:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.734 11:52:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.734 11:52:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.734 11:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:36.734 ************************************ 00:05:36.734 START TEST dpdk_mem_utility 00:05:36.734 ************************************ 00:05:36.734 11:52:13 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.734 * Looking for test storage... 
00:05:36.734 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:36.734 11:52:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.734 11:52:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=894072 00:05:36.734 11:52:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 894072 00:05:36.734 11:52:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.734 11:52:13 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 894072 ']' 00:05:36.734 11:52:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.734 11:52:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.734 11:52:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.734 11:52:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.734 11:52:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.734 [2024-07-25 11:52:13.962061] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:36.734 [2024-07-25 11:52:13.962132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894072 ] 00:05:36.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.991 [2024-07-25 11:52:14.046516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.991 [2024-07-25 11:52:14.134657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.557 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.557 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:37.557 11:52:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.557 11:52:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.557 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.557 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.557 { 00:05:37.557 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.557 } 00:05:37.557 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.557 11:52:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.557 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:37.557 1 heaps totaling size 814.000000 MiB 00:05:37.557 size: 814.000000 MiB heap id: 0 00:05:37.557 end heaps---------- 00:05:37.557 8 mempools totaling size 598.116089 MiB 00:05:37.557 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.557 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.557 size: 84.521057 MiB name: bdev_io_894072 00:05:37.557 size: 51.011292 MiB name: evtpool_894072 00:05:37.557 
size: 50.003479 MiB name: msgpool_894072 00:05:37.557 size: 21.763794 MiB name: PDU_Pool 00:05:37.557 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.557 size: 0.026123 MiB name: Session_Pool 00:05:37.557 end mempools------- 00:05:37.557 6 memzones totaling size 4.142822 MiB 00:05:37.557 size: 1.000366 MiB name: RG_ring_0_894072 00:05:37.557 size: 1.000366 MiB name: RG_ring_1_894072 00:05:37.557 size: 1.000366 MiB name: RG_ring_4_894072 00:05:37.557 size: 1.000366 MiB name: RG_ring_5_894072 00:05:37.557 size: 0.125366 MiB name: RG_ring_2_894072 00:05:37.557 size: 0.015991 MiB name: RG_ring_3_894072 00:05:37.557 end memzones------- 00:05:37.557 11:52:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.815 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:37.815 list of free elements. size: 12.519348 MiB 00:05:37.815 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:37.815 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:37.815 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:37.815 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:37.815 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:37.815 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:37.815 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:37.815 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:37.815 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:37.815 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:37.815 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:37.815 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:37.815 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:37.815 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:37.815 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:37.815 list of standard malloc elements. 
size: 199.218079 MiB 00:05:37.815 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:37.815 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:37.815 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:37.815 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:37.815 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:37.815 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:37.815 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:37.815 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:37.815 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:37.815 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:37.815 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:37.815 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:37.815 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:37.815 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:37.815 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:37.815 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:37.815 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:37.815 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:37.815 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:37.815 list of memzone associated elements. 
size: 602.262573 MiB 00:05:37.815 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:37.815 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.815 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:37.815 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.815 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:37.815 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_894072_0 00:05:37.815 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:37.815 associated memzone info: size: 48.002930 MiB name: MP_evtpool_894072_0 00:05:37.815 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:37.815 associated memzone info: size: 48.002930 MiB name: MP_msgpool_894072_0 00:05:37.815 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:37.815 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.815 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:37.815 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.815 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:37.815 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_894072 00:05:37.815 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:37.815 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_894072 00:05:37.815 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:37.815 associated memzone info: size: 1.007996 MiB name: MP_evtpool_894072 00:05:37.815 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:37.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.815 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:37.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.815 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:37.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.815 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:37.815 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.815 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:37.815 associated memzone info: size: 1.000366 MiB name: RG_ring_0_894072 00:05:37.815 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:37.815 associated memzone info: size: 1.000366 MiB name: RG_ring_1_894072 00:05:37.815 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:37.815 associated memzone info: size: 1.000366 MiB name: RG_ring_4_894072 00:05:37.815 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:37.815 associated memzone info: size: 1.000366 MiB name: RG_ring_5_894072 00:05:37.815 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:37.815 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_894072 00:05:37.815 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:37.815 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.815 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:37.815 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.815 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:37.815 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.815 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:37.815 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_894072 00:05:37.815 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:37.815 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.815 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:37.815 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.815 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:37.815 associated memzone info: size: 0.015991 MiB name: RG_ring_3_894072 00:05:37.815 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:37.815 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.815 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:37.815 associated memzone info: size: 0.000183 MiB name: MP_msgpool_894072 00:05:37.815 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:37.815 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_894072 00:05:37.815 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:37.815 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.815 11:52:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.815 11:52:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 894072 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 894072 ']' 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 894072 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 894072 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 894072' 00:05:37.815 killing process with pid 894072 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 894072 00:05:37.815 11:52:14 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 894072 00:05:38.073 00:05:38.073 real 0m1.452s 00:05:38.073 user 0m1.439s 00:05:38.073 sys 0m0.485s 00:05:38.073 11:52:15 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.073 11:52:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.073 ************************************ 00:05:38.073 END TEST dpdk_mem_utility 00:05:38.073 ************************************ 00:05:38.073 11:52:15 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:38.073 11:52:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.073 11:52:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.073 11:52:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.073 ************************************ 00:05:38.073 START TEST event 00:05:38.073 ************************************ 00:05:38.073 11:52:15 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:38.332 * Looking for test storage... 
00:05:38.332 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:38.332 11:52:15 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:38.332 11:52:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:38.332 11:52:15 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.332 11:52:15 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:38.332 11:52:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.332 11:52:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.332 ************************************ 00:05:38.332 START TEST event_perf 00:05:38.332 ************************************ 00:05:38.332 11:52:15 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.332 Running I/O for 1 seconds...[2024-07-25 11:52:15.535353] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:38.332 [2024-07-25 11:52:15.535441] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894368 ] 00:05:38.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.332 [2024-07-25 11:52:15.623767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.590 [2024-07-25 11:52:15.709629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.590 [2024-07-25 11:52:15.709729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.590 [2024-07-25 11:52:15.709815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.590 [2024-07-25 11:52:15.709815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.527 Running I/O for 1 seconds... 00:05:39.527 lcore 0: 188307 00:05:39.527 lcore 1: 188305 00:05:39.527 lcore 2: 188306 00:05:39.527 lcore 3: 188306 00:05:39.527 done. 00:05:39.527 00:05:39.527 real 0m1.267s 00:05:39.527 user 0m4.158s 00:05:39.527 sys 0m0.106s 00:05:39.527 11:52:16 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.527 11:52:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.527 ************************************ 00:05:39.527 END TEST event_perf 00:05:39.527 ************************************ 00:05:39.527 11:52:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:39.527 11:52:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:39.527 11:52:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.527 11:52:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.787 ************************************ 00:05:39.787 START TEST event_reactor 00:05:39.787 ************************************ 00:05:39.787 11:52:16 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:39.787 [2024-07-25 11:52:16.889022] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:39.787 [2024-07-25 11:52:16.889105] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894559 ] 00:05:39.787 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.787 [2024-07-25 11:52:16.978625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.787 [2024-07-25 11:52:17.059706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.163 test_start 00:05:41.163 oneshot 00:05:41.163 tick 100 00:05:41.163 tick 100 00:05:41.163 tick 250 00:05:41.163 tick 100 00:05:41.163 tick 100 00:05:41.163 tick 100 00:05:41.163 tick 250 00:05:41.163 tick 500 00:05:41.163 tick 100 00:05:41.163 tick 100 00:05:41.163 tick 250 00:05:41.163 tick 100 00:05:41.163 tick 100 00:05:41.163 test_end 00:05:41.163 00:05:41.163 real 0m1.257s 00:05:41.163 user 0m1.146s 00:05:41.163 sys 0m0.107s 00:05:41.163 11:52:18 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.163 11:52:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:41.163 ************************************ 00:05:41.163 END TEST event_reactor 00:05:41.163 ************************************ 00:05:41.163 11:52:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.163 11:52:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:41.163 11:52:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.163 11:52:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.163 ************************************ 00:05:41.163 START TEST event_reactor_perf 00:05:41.163 ************************************ 00:05:41.163 11:52:18 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.163 [2024-07-25 11:52:18.230517] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
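(annotation, not log output) The two subtests completed above exercise the event framework directly: event_perf floods every reactor in the core mask with events for the requested duration and prints a per-lcore count, and event_reactor schedules a one-shot event plus periodic pollers whose period arguments label each "tick" line. A minimal sketch of running them by hand from an SPDK checkout, using the flags shown in the log:

  # event throughput across 4 reactors for 1 second (prints "lcore N: <events>")
  ./test/event/event_perf/event_perf -m 0xF -t 1
  # single-reactor one-shot/periodic poller smoke test for 1 second
  ./test/event/reactor/reactor -t 1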
00:05:41.163 [2024-07-25 11:52:18.230602] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894738 ] 00:05:41.163 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.163 [2024-07-25 11:52:18.319850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.163 [2024-07-25 11:52:18.405654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.534 test_start 00:05:42.534 test_end 00:05:42.534 Performance: 933302 events per second 00:05:42.534 00:05:42.534 real 0m1.267s 00:05:42.534 user 0m1.156s 00:05:42.534 sys 0m0.106s 00:05:42.534 11:52:19 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.534 11:52:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.534 ************************************ 00:05:42.534 END TEST event_reactor_perf 00:05:42.534 ************************************ 00:05:42.534 11:52:19 event -- event/event.sh@49 -- # uname -s 00:05:42.535 11:52:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:42.535 11:52:19 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.535 11:52:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.535 11:52:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.535 11:52:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.535 ************************************ 00:05:42.535 START TEST event_scheduler 00:05:42.535 ************************************ 00:05:42.535 11:52:19 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.535 * Looking for test storage... 00:05:42.535 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:42.535 11:52:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:42.535 11:52:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=895004 00:05:42.535 11:52:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.535 11:52:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:42.535 11:52:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 895004 00:05:42.535 11:52:19 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 895004 ']' 00:05:42.535 11:52:19 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.535 11:52:19 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.535 11:52:19 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.535 11:52:19 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.535 11:52:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.535 [2024-07-25 11:52:19.708671] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:42.535 [2024-07-25 11:52:19.708763] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895004 ] 00:05:42.535 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.535 [2024-07-25 11:52:19.796263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.794 [2024-07-25 11:52:19.893576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.794 [2024-07-25 11:52:19.893699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.794 [2024-07-25 11:52:19.893677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.794 [2024-07-25 11:52:19.893699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:43.362 11:52:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.362 [2024-07-25 11:52:20.560450] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:43.362 [2024-07-25 11:52:20.560481] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:43.362 [2024-07-25 11:52:20.560493] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:43.362 [2024-07-25 11:52:20.560501] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:43.362 [2024-07-25 11:52:20.560508] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.362 11:52:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.362 [2024-07-25 11:52:20.635466] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
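(annotation, not log output) The scheduler test starts the app with --wait-for-rpc, switches it to the dynamic scheduler, and only then issues framework_start_init; the three set_opts notices above (load limit 20, core limit 80, core busy 95) are the dynamic scheduler's rebalancing thresholds, and the dpdk_governor error is expected here because the app core mask contains some but not all of a set of SMT siblings. A hedged sketch of the same sequence with scripts/rpc.py; the threshold flags are an assumption about the rpc.py options available in this tree, so verify them before relying on this:

  ./scripts/rpc.py framework_set_scheduler dynamic --load-limit 20 --core-limit 80 --core-busy 95
  ./scripts/rpc.py framework_start_init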
00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.362 11:52:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.362 11:52:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 ************************************ 00:05:43.630 START TEST scheduler_create_thread 00:05:43.630 ************************************ 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 2 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 3 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 4 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 5 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 6 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 7 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 8 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 9 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 10 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.630 11:52:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.263 11:52:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.263 11:52:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:44.263 11:52:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.263 11:52:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.693 11:52:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.693 11:52:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:45.693 11:52:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:45.693 11:52:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.693 11:52:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.628 11:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.628 00:05:46.628 real 0m3.100s 00:05:46.628 user 0m0.018s 00:05:46.628 sys 0m0.013s 00:05:46.628 11:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.628 11:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.628 ************************************ 00:05:46.628 END TEST scheduler_create_thread 00:05:46.628 ************************************ 00:05:46.628 11:52:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.628 11:52:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 895004 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 895004 ']' 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 895004 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 895004 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 895004' 00:05:46.628 killing process with pid 895004 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 895004 00:05:46.628 11:52:23 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 895004 00:05:46.887 [2024-07-25 11:52:24.158506] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
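(annotation, not log output) scheduler_create_thread drives the scheduler app through an out-of-tree RPC plugin rather than built-in RPCs: every rpc_cmd above passes --plugin scheduler_plugin to create pinned threads (-n name, -m cpumask, -a active percentage), change one thread's activity, and delete another. A condensed sketch of the same calls; the RPC names and arguments are verbatim from the log, while the PYTHONPATH line is an assumption about how rpc.py locates the plugin:

  export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread 11 -> 50% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12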
00:05:47.146 00:05:47.146 real 0m4.814s 00:05:47.146 user 0m9.267s 00:05:47.146 sys 0m0.489s 00:05:47.146 11:52:24 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.146 11:52:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.146 ************************************ 00:05:47.146 END TEST event_scheduler 00:05:47.146 ************************************ 00:05:47.146 11:52:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.146 11:52:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.146 11:52:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.146 11:52:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.146 11:52:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.406 ************************************ 00:05:47.406 START TEST app_repeat 00:05:47.406 ************************************ 00:05:47.406 11:52:24 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=895728 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 895728' 00:05:47.406 Process app_repeat pid: 895728 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.406 spdk_app_start Round 0 00:05:47.406 11:52:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 895728 /var/tmp/spdk-nbd.sock 00:05:47.406 11:52:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 895728 ']' 00:05:47.406 11:52:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.406 11:52:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.406 11:52:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.406 11:52:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.406 11:52:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.406 [2024-07-25 11:52:24.511424] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:47.406 [2024-07-25 11:52:24.511513] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895728 ] 00:05:47.406 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.406 [2024-07-25 11:52:24.599267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.406 [2024-07-25 11:52:24.686966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.406 [2024-07-25 11:52:24.686966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.343 11:52:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.343 11:52:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:48.344 11:52:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.344 Malloc0 00:05:48.344 11:52:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.603 Malloc1 00:05:48.603 11:52:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.603 11:52:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.861 /dev/nbd0 00:05:48.861 11:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.861 11:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.861 11:52:25 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.861 1+0 records in 00:05:48.861 1+0 records out 00:05:48.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240754 s, 17.0 MB/s 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.861 11:52:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:48.861 11:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.861 11:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.861 11:52:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.861 /dev/nbd1 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.120 1+0 records in 00:05:49.120 1+0 records out 00:05:49.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273822 s, 15.0 MB/s 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.120 11:52:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.120 
11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.120 { 00:05:49.120 "nbd_device": "/dev/nbd0", 00:05:49.120 "bdev_name": "Malloc0" 00:05:49.120 }, 00:05:49.120 { 00:05:49.120 "nbd_device": "/dev/nbd1", 00:05:49.120 "bdev_name": "Malloc1" 00:05:49.120 } 00:05:49.120 ]' 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.120 { 00:05:49.120 "nbd_device": "/dev/nbd0", 00:05:49.120 "bdev_name": "Malloc0" 00:05:49.120 }, 00:05:49.120 { 00:05:49.120 "nbd_device": "/dev/nbd1", 00:05:49.120 "bdev_name": "Malloc1" 00:05:49.120 } 00:05:49.120 ]' 00:05:49.120 11:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.379 /dev/nbd1' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.379 /dev/nbd1' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.379 256+0 records in 00:05:49.379 256+0 records out 00:05:49.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011434 s, 91.7 MB/s 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.379 256+0 records in 00:05:49.379 256+0 records out 00:05:49.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214418 s, 48.9 MB/s 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.379 256+0 records in 00:05:49.379 256+0 records out 
00:05:49.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227926 s, 46.0 MB/s 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.379 11:52:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.637 11:52:26 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.637 11:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.896 11:52:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.896 11:52:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.155 11:52:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.415 [2024-07-25 11:52:27.534658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.415 [2024-07-25 11:52:27.614272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.415 [2024-07-25 11:52:27.614272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.415 [2024-07-25 11:52:27.661078] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.415 [2024-07-25 11:52:27.661130] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.703 11:52:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.703 11:52:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.703 spdk_app_start Round 1 00:05:53.703 11:52:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 895728 /var/tmp/spdk-nbd.sock 00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 895728 ']' 00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
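(annotation, not log output) Round 0 above is one full app_repeat cycle, and Rounds 1 and 2 below repeat it verbatim: create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data to each with dd, verify byte-for-byte with cmp, stop the NBD disks, then spdk_kill_instance SIGTERM and sleep 3 before the next round. Condensed sketch of one disk's data path, with commands taken from the log and the long workspace paths shortened:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                          # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                                           # byte-for-byte verify
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM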
00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.703 11:52:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:53.704 11:52:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.704 Malloc0 00:05:53.704 11:52:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.704 Malloc1 00:05:53.704 11:52:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.704 11:52:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.962 /dev/nbd0 00:05:53.962 11:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.962 11:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.962 1+0 records in 00:05:53.962 1+0 records out 00:05:53.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279382 s, 14.7 MB/s 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:53.962 11:52:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:53.962 11:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.962 11:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.962 11:52:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.220 /dev/nbd1 00:05:54.220 11:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.220 11:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.220 1+0 records in 00:05:54.220 1+0 records out 00:05:54.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272656 s, 15.0 MB/s 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.220 11:52:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.220 11:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.220 11:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.220 11:52:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.220 11:52:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.220 11:52:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.478 { 00:05:54.478 "nbd_device": "/dev/nbd0", 00:05:54.478 "bdev_name": "Malloc0" 00:05:54.478 }, 00:05:54.478 { 00:05:54.478 "nbd_device": "/dev/nbd1", 00:05:54.478 "bdev_name": "Malloc1" 00:05:54.478 } 00:05:54.478 ]' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.478 { 00:05:54.478 "nbd_device": "/dev/nbd0", 00:05:54.478 "bdev_name": "Malloc0" 00:05:54.478 }, 00:05:54.478 { 00:05:54.478 "nbd_device": "/dev/nbd1", 00:05:54.478 "bdev_name": "Malloc1" 00:05:54.478 } 00:05:54.478 ]' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.478 /dev/nbd1' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.478 /dev/nbd1' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.478 256+0 records in 00:05:54.478 256+0 records out 00:05:54.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00350714 s, 299 MB/s 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.478 256+0 records in 00:05:54.478 256+0 records out 00:05:54.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02085 s, 50.3 MB/s 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.478 256+0 records in 00:05:54.478 256+0 records out 00:05:54.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228448 s, 45.9 MB/s 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.478 11:52:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.736 11:52:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.737 11:52:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.995 11:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.254 11:52:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.254 11:52:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.514 11:52:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.514 [2024-07-25 11:52:32.744915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.773 [2024-07-25 11:52:32.826454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.773 [2024-07-25 11:52:32.826455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.773 [2024-07-25 11:52:32.874444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.773 [2024-07-25 11:52:32.874495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.305 11:52:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.305 11:52:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:58.305 spdk_app_start Round 2 00:05:58.305 11:52:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 895728 /var/tmp/spdk-nbd.sock 00:05:58.305 11:52:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 895728 ']' 00:05:58.305 11:52:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.305 11:52:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.305 11:52:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:58.305 11:52:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.305 11:52:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.564 11:52:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.564 11:52:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:58.564 11:52:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.823 Malloc0 00:05:58.823 11:52:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.823 Malloc1 00:05:59.082 11:52:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.082 11:52:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.082 11:52:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.082 11:52:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.082 11:52:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.082 11:52:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.083 /dev/nbd0 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.083 1+0 records in 00:05:59.083 1+0 records out 00:05:59.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236147 s, 17.3 MB/s 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:59.083 11:52:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.083 11:52:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.342 /dev/nbd1 00:05:59.342 11:52:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.342 11:52:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.342 1+0 records in 00:05:59.342 1+0 records out 00:05:59.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265426 s, 15.4 MB/s 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:59.342 11:52:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:59.342 11:52:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.342 11:52:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.342 11:52:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.342 11:52:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.342 11:52:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.601 { 00:05:59.601 "nbd_device": "/dev/nbd0", 00:05:59.601 "bdev_name": "Malloc0" 00:05:59.601 }, 00:05:59.601 { 00:05:59.601 "nbd_device": "/dev/nbd1", 00:05:59.601 "bdev_name": "Malloc1" 00:05:59.601 } 00:05:59.601 ]' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.601 { 00:05:59.601 "nbd_device": "/dev/nbd0", 00:05:59.601 "bdev_name": "Malloc0" 00:05:59.601 }, 00:05:59.601 { 00:05:59.601 "nbd_device": "/dev/nbd1", 00:05:59.601 "bdev_name": "Malloc1" 00:05:59.601 } 00:05:59.601 ]' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.601 /dev/nbd1' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.601 /dev/nbd1' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.601 256+0 records in 00:05:59.601 256+0 records out 00:05:59.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113061 s, 92.7 MB/s 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.601 256+0 records in 00:05:59.601 256+0 records out 00:05:59.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212699 s, 49.3 MB/s 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.601 256+0 records in 00:05:59.601 256+0 records out 00:05:59.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227604 s, 46.1 MB/s 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.601 11:52:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.860 11:52:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.860 11:52:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.860 11:52:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.861 11:52:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.861 11:52:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.861 11:52:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.861 11:52:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.861 11:52:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.119 11:52:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.378 11:52:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.378 11:52:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.637 11:52:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.637 [2024-07-25 11:52:37.926342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.896 [2024-07-25 11:52:38.005590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.896 [2024-07-25 11:52:38.005590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.896 [2024-07-25 11:52:38.052522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.896 [2024-07-25 11:52:38.052573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.185 11:52:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 895728 /var/tmp/spdk-nbd.sock 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 895728 ']' 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
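Two more nbd_common.sh helpers can be reconstructed from the traces above: waitfornbd (the /proc/partitions poll plus a single direct-I/O read, @868-@889) and nbd_dd_data_verify (the urandom write pass and cmp verify pass, @70-@85). A hedged sketch: the sleep interval and the $testdir variable are assumptions, everything else mirrors the logged commands:

    # Wait until the kernel exposes the nbd device, then prove it serves reads.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pause; the trace only shows the poll loop
        done
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$testdir/nbdtest")
            rm -f "$testdir/nbdtest"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

    # Write one random MiB through each nbd device, then compare it back.
    nbd_dd_data_verify() {
        local nbd_list=($1)
        local operation=$2
        local tmp_file=$testdir/nbdrandtest
        local i
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }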
00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.185 11:52:40 event.app_repeat -- event/event.sh@39 -- # killprocess 895728 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 895728 ']' 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 895728 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.185 11:52:40 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 895728 00:06:04.185 11:52:41 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.185 11:52:41 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.185 11:52:41 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 895728' 00:06:04.185 killing process with pid 895728 00:06:04.185 11:52:41 event.app_repeat -- common/autotest_common.sh@969 -- # kill 895728 00:06:04.185 11:52:41 event.app_repeat -- common/autotest_common.sh@974 -- # wait 895728 00:06:04.185 spdk_app_start is called in Round 0. 00:06:04.185 Shutdown signal received, stop current app iteration 00:06:04.185 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:06:04.185 spdk_app_start is called in Round 1. 00:06:04.185 Shutdown signal received, stop current app iteration 00:06:04.185 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:06:04.185 spdk_app_start is called in Round 2. 00:06:04.185 Shutdown signal received, stop current app iteration 00:06:04.185 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:06:04.185 spdk_app_start is called in Round 3. 
00:06:04.185 Shutdown signal received, stop current app iteration 00:06:04.185 11:52:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.185 11:52:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.185 00:06:04.185 real 0m16.697s 00:06:04.185 user 0m35.547s 00:06:04.185 sys 0m3.339s 00:06:04.185 11:52:41 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.185 11:52:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.185 ************************************ 00:06:04.185 END TEST app_repeat 00:06:04.185 ************************************ 00:06:04.185 11:52:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.185 11:52:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.185 11:52:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.185 11:52:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.185 11:52:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.185 ************************************ 00:06:04.185 START TEST cpu_locks 00:06:04.185 ************************************ 00:06:04.185 11:52:41 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.185 * Looking for test storage... 00:06:04.185 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:04.185 11:52:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.185 11:52:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.185 11:52:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.185 11:52:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.185 11:52:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.185 11:52:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.185 11:52:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.185 ************************************ 00:06:04.185 START TEST default_locks 00:06:04.185 ************************************ 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=898151 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 898151 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 898151 ']' 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
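The 'Waiting for process to start up...' lines that open every test come from waitforlisten. Only its retry skeleton is visible in this trace (max_retries=100 at @836, the (( i == 0 )) check at @860, return 0 at @864), so the readiness probe below, a short-timeout RPC call, is an assumption; the real helper lives in autotest_common.sh:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
            # assumed probe: succeeds only once the socket accepts RPCs
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                break
            fi
            sleep 0.5
        done
        (( i == 0 )) && return 1   # retries exhausted, mirrors the @860 check
        return 0
    }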
00:06:04.185 11:52:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.185 11:52:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.185 [2024-07-25 11:52:41.436178] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:04.185 [2024-07-25 11:52:41.436241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898151 ] 00:06:04.185 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.445 [2024-07-25 11:52:41.516640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.445 [2024-07-25 11:52:41.595625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.073 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.073 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:05.073 11:52:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 898151 00:06:05.073 11:52:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 898151 00:06:05.073 11:52:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.641 lslocks: write error 00:06:05.641 11:52:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 898151 00:06:05.641 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 898151 ']' 00:06:05.641 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 898151 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898151 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898151' 00:06:05.642 killing process with pid 898151 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 898151 00:06:05.642 11:52:42 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 898151 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 898151 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 898151 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 898151 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 898151 ']' 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.901 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (898151) - No such process 00:06:05.901 ERROR: process (pid: 898151) is no longer running 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.901 00:06:05.901 real 0m1.631s 00:06:05.901 user 0m1.667s 00:06:05.901 sys 0m0.603s 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.901 11:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.901 ************************************ 00:06:05.901 END TEST default_locks 00:06:05.901 ************************************ 00:06:05.902 11:52:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.902 11:52:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.902 11:52:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.902 11:52:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.902 ************************************ 00:06:05.902 START TEST default_locks_via_rpc 00:06:05.902 ************************************ 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=898376 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 898376 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
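The default_locks run above boils down to two helpers. locks_exist is fully visible in the @22 trace lines; NOT is sketched from the @650-@677 skeleton, with valid_exec_arg's type check omitted for brevity and the signal screening simplified:

    # A claimed core shows up as a POSIX lock named spdk_cpu_lock
    # held by the target pid.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # Run a command that is expected to fail; succeed only if it did fail.
    NOT() {
        local es=0
        "$@" || es=$?
        # the trace also screens out signal exits (es > 128); this sketch
        # treats any non-zero status as the expected failure
        (( es != 0 ))
    }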
00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 898376 ']' 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.902 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.902 [2024-07-25 11:52:43.150352] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:05.902 [2024-07-25 11:52:43.150437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898376 ] 00:06:05.902 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.161 [2024-07-25 11:52:43.235828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.161 [2024-07-25 11:52:43.325870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.729 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 898376 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 898376 00:06:06.730 11:52:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 898376 
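default_locks_via_rpc exercises the same lock, but toggled at runtime over the RPC socket instead of at process start. The RPC names come straight from the rpc_cmd lines above; $spdk_tgt_pid is a placeholder for the pid logged as 898376, and rpc.py is abbreviated from the full workspace path:

    # Drop the core locks on a live target, confirm they are gone,
    # then re-acquire them and confirm the flock is back.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock   # expect 0
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # lock held again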
00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 898376 ']' 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 898376 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898376 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898376' 00:06:07.298 killing process with pid 898376 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 898376 00:06:07.298 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 898376 00:06:07.557 00:06:07.558 real 0m1.622s 00:06:07.558 user 0m1.667s 00:06:07.558 sys 0m0.588s 00:06:07.558 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.558 11:52:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.558 ************************************ 00:06:07.558 END TEST default_locks_via_rpc 00:06:07.558 ************************************ 00:06:07.558 11:52:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.558 11:52:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.558 11:52:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.558 11:52:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.558 ************************************ 00:06:07.558 START TEST non_locking_app_on_locked_coremask 00:06:07.558 ************************************ 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=898724 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 898724 /var/tmp/spdk.sock 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 898724 ']' 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.558 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.558 [2024-07-25 11:52:44.859478] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:07.558 [2024-07-25 11:52:44.859558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898724 ] 00:06:07.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.817 [2024-07-25 11:52:44.946278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.817 [2024-07-25 11:52:45.026698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=898766 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 898766 /var/tmp/spdk2.sock 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 898766 ']' 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.755 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.755 [2024-07-25 11:52:45.710339] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:08.755 [2024-07-25 11:52:45.710403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898766 ] 00:06:08.755 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.755 [2024-07-25 11:52:45.805956] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
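The pattern being tested here, per the @79/@83 command lines above: a locking target and a lock-free target sharing core 0 must coexist. A condensed sketch, with $SPDK_DIR standing in for the jenkins workspace path:

    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &                 # claims spdk_cpu_lock on core 0
    locked_pid=$!
    waitforlisten "$locked_pid" /var/tmp/spdk.sock
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    unlocked_pid=$!                                       # same core, no lock taken
    waitforlisten "$unlocked_pid" /var/tmp/spdk2.sock     # both now run side by side
    locks_exist "$locked_pid"                             # the flock stays with the first pid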
00:06:08.755 [2024-07-25 11:52:45.805986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.755 [2024-07-25 11:52:45.966859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.324 11:52:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.324 11:52:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.324 11:52:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 898724 00:06:09.324 11:52:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.324 11:52:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 898724 00:06:10.261 lslocks: write error 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 898724 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 898724 ']' 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 898724 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898724 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898724' 00:06:10.261 killing process with pid 898724 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 898724 00:06:10.261 11:52:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 898724 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 898766 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 898766 ']' 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 898766 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898766 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898766' 00:06:11.201 killing 
process with pid 898766 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 898766 00:06:11.201 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 898766 00:06:11.461 00:06:11.461 real 0m3.760s 00:06:11.461 user 0m3.966s 00:06:11.461 sys 0m1.241s 00:06:11.461 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.461 11:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.461 ************************************ 00:06:11.461 END TEST non_locking_app_on_locked_coremask 00:06:11.461 ************************************ 00:06:11.461 11:52:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.461 11:52:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.461 11:52:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.461 11:52:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.461 ************************************ 00:06:11.461 START TEST locking_app_on_unlocked_coremask 00:06:11.461 ************************************ 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=899173 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 899173 /var/tmp/spdk.sock 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 899173 ']' 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.461 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.461 [2024-07-25 11:52:48.708344] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:11.461 [2024-07-25 11:52:48.708413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899173 ] 00:06:11.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.721 [2024-07-25 11:52:48.795233] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.721 [2024-07-25 11:52:48.795266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.721 [2024-07-25 11:52:48.883783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=899351 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 899351 /var/tmp/spdk2.sock 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 899351 ']' 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.289 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.289 [2024-07-25 11:52:49.566050] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
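locking_app_on_unlocked_coremask flips the order of the previous test: the first target starts with --disable-cpumask-locks and leaves core 0 unclaimed, so the second, locking target acquires the flock. Sketch under the same $SPDK_DIR assumption:

    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    unlocked_pid=$!
    waitforlisten "$unlocked_pid" /var/tmp/spdk.sock      # no lock taken yet
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    locked_pid=$!
    waitforlisten "$locked_pid" /var/tmp/spdk2.sock
    locks_exist "$locked_pid"                             # the lock belongs to the second pid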
00:06:12.289 [2024-07-25 11:52:49.566099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899351 ] 00:06:12.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.549 [2024-07-25 11:52:49.656801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.549 [2024-07-25 11:52:49.816332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.117 11:52:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.117 11:52:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.117 11:52:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 899351 00:06:13.117 11:52:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 899351 00:06:13.117 11:52:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.495 lslocks: write error 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 899173 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 899173 ']' 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 899173 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 899173 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 899173' 00:06:14.495 killing process with pid 899173 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 899173 00:06:14.495 11:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 899173 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 899351 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 899351 ']' 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 899351 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 899351 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 899351' 00:06:15.064 killing process with pid 899351 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 899351 00:06:15.064 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 899351 00:06:15.324 00:06:15.324 real 0m3.836s 00:06:15.324 user 0m4.033s 00:06:15.324 sys 0m1.285s 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.324 ************************************ 00:06:15.324 END TEST locking_app_on_unlocked_coremask 00:06:15.324 ************************************ 00:06:15.324 11:52:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.324 11:52:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.324 11:52:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.324 11:52:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.324 ************************************ 00:06:15.324 START TEST locking_app_on_locked_coremask 00:06:15.324 ************************************ 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=899753 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 899753 /var/tmp/spdk.sock 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 899753 ']' 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.324 11:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.584 [2024-07-25 11:52:52.632354] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:15.584 [2024-07-25 11:52:52.632422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899753 ] 00:06:15.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.584 [2024-07-25 11:52:52.718457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.584 [2024-07-25 11:52:52.804956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=899935 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 899935 /var/tmp/spdk2.sock 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 899935 /var/tmp/spdk2.sock 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 899935 /var/tmp/spdk2.sock 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 899935 ']' 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.521 11:52:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.521 [2024-07-25 11:52:53.494568] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:16.521 [2024-07-25 11:52:53.494646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899935 ] 00:06:16.521 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.521 [2024-07-25 11:52:53.585340] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 899753 has claimed it. 00:06:16.521 [2024-07-25 11:52:53.585378] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.090 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (899935) - No such process 00:06:17.090 ERROR: process (pid: 899935) is no longer running 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 899753 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 899753 00:06:17.090 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.659 lslocks: write error 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 899753 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 899753 ']' 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 899753 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 899753 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 899753' 00:06:17.659 killing process with pid 899753 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 899753 00:06:17.659 11:52:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 899753 00:06:17.919 00:06:17.919 real 0m2.531s 00:06:17.919 user 0m2.728s 00:06:17.919 sys 0m0.786s 00:06:17.919 11:52:55 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.919 11:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.919 ************************************ 00:06:17.919 END TEST locking_app_on_locked_coremask 00:06:17.919 ************************************ 00:06:17.919 11:52:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:17.919 11:52:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.919 11:52:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.919 11:52:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.178 ************************************ 00:06:18.178 START TEST locking_overlapped_coremask 00:06:18.178 ************************************ 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=900152 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 900152 /var/tmp/spdk.sock 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 900152 ']' 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.178 11:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.178 [2024-07-25 11:52:55.250537] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:18.178 [2024-07-25 11:52:55.250616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900152 ] 00:06:18.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.178 [2024-07-25 11:52:55.334396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.178 [2024-07-25 11:52:55.423972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.178 [2024-07-25 11:52:55.424073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.178 [2024-07-25 11:52:55.424073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=900330 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 900330 /var/tmp/spdk2.sock 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 900330 /var/tmp/spdk2.sock 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 900330 /var/tmp/spdk2.sock 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 900330 ']' 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.115 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.115 [2024-07-25 11:52:56.123711] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:19.116 [2024-07-25 11:52:56.123785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900330 ] 00:06:19.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.116 [2024-07-25 11:52:56.220580] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 900152 has claimed it. 00:06:19.116 [2024-07-25 11:52:56.220621] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.684 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (900330) - No such process 00:06:19.684 ERROR: process (pid: 900330) is no longer running 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 900152 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 900152 ']' 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 900152 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 900152 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 900152' 00:06:19.684 killing process with pid 900152 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 
-- # kill 900152 00:06:19.684 11:52:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 900152 00:06:19.944 00:06:19.944 real 0m1.946s 00:06:19.944 user 0m5.424s 00:06:19.944 sys 0m0.479s 00:06:19.944 11:52:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.944 11:52:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.944 ************************************ 00:06:19.944 END TEST locking_overlapped_coremask 00:06:19.944 ************************************ 00:06:19.944 11:52:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.944 11:52:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.944 11:52:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.944 11:52:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.203 ************************************ 00:06:20.203 START TEST locking_overlapped_coremask_via_rpc 00:06:20.203 ************************************ 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=900476 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 900476 /var/tmp/spdk.sock 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 900476 ']' 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.203 11:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.203 [2024-07-25 11:52:57.285095] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:20.203 [2024-07-25 11:52:57.285166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900476 ] 00:06:20.203 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.203 [2024-07-25 11:52:57.371190] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.203 [2024-07-25 11:52:57.371222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.203 [2024-07-25 11:52:57.462575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.203 [2024-07-25 11:52:57.462679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.203 [2024-07-25 11:52:57.462680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=900562 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 900562 /var/tmp/spdk2.sock 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 900562 ']' 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.141 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.141 [2024-07-25 11:52:58.149950] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:21.141 [2024-07-25 11:52:58.150018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900562 ] 00:06:21.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.141 [2024-07-25 11:52:58.246022] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.141 [2024-07-25 11:52:58.246055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.141 [2024-07-25 11:52:58.407523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.141 [2024-07-25 11:52:58.410793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.141 [2024-07-25 11:52:58.410794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.710 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.710 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.710 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.710 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.710 11:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.710 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.969 [2024-07-25 11:52:59.017802] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 900476 has claimed it. 
00:06:21.969 request: 00:06:21.969 { 00:06:21.969 "method": "framework_enable_cpumask_locks", 00:06:21.969 "req_id": 1 00:06:21.969 } 00:06:21.969 Got JSON-RPC error response 00:06:21.969 response: 00:06:21.969 { 00:06:21.969 "code": -32603, 00:06:21.969 "message": "Failed to claim CPU core: 2" 00:06:21.969 } 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 900476 /var/tmp/spdk.sock 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 900476 ']' 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 900562 /var/tmp/spdk2.sock 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 900562 ']' 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.969 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.228 00:06:22.228 real 0m2.151s 00:06:22.228 user 0m0.866s 00:06:22.228 sys 0m0.208s 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.228 11:52:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.228 ************************************ 00:06:22.228 END TEST locking_overlapped_coremask_via_rpc 00:06:22.228 ************************************ 00:06:22.228 11:52:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:22.228 11:52:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 900476 ]] 00:06:22.228 11:52:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 900476 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 900476 ']' 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 900476 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 900476 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 900476' 00:06:22.228 killing process with pid 900476 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 900476 00:06:22.228 11:52:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 900476 00:06:22.797 11:52:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 900562 ]] 00:06:22.797 11:52:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 900562 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 900562 ']' 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 900562 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 900562 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 900562' 00:06:22.797 killing process with pid 900562 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 900562 00:06:22.797 11:52:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 900562 00:06:23.056 11:53:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.056 11:53:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.056 11:53:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 900476 ]] 00:06:23.056 11:53:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 900476 00:06:23.056 11:53:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 900476 ']' 00:06:23.056 11:53:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 900476 00:06:23.056 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (900476) - No such process 00:06:23.056 11:53:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 900476 is not found' 00:06:23.056 Process with pid 900476 is not found 00:06:23.056 11:53:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 900562 ]] 00:06:23.056 11:53:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 900562 00:06:23.056 11:53:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 900562 ']' 00:06:23.057 11:53:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 900562 00:06:23.057 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (900562) - No such process 00:06:23.057 11:53:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 900562 is not found' 00:06:23.057 Process with pid 900562 is not found 00:06:23.057 11:53:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.057 00:06:23.057 real 0m19.000s 00:06:23.057 user 0m31.333s 00:06:23.057 sys 0m6.303s 00:06:23.057 11:53:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.057 11:53:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.057 ************************************ 00:06:23.057 END TEST cpu_locks 00:06:23.057 ************************************ 00:06:23.057 00:06:23.057 real 0m44.943s 00:06:23.057 user 1m22.837s 00:06:23.057 sys 0m10.911s 00:06:23.057 11:53:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.057 11:53:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.057 ************************************ 00:06:23.057 END TEST event 00:06:23.057 ************************************ 00:06:23.057 11:53:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:23.057 11:53:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.057 11:53:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.057 11:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.316 ************************************ 00:06:23.316 START TEST thread 00:06:23.316 ************************************ 00:06:23.316 11:53:00 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:23.316 * Looking for test storage... 00:06:23.316 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:23.316 11:53:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.316 11:53:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:23.316 11:53:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.316 11:53:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.316 ************************************ 00:06:23.316 START TEST thread_poller_perf 00:06:23.316 ************************************ 00:06:23.316 11:53:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.316 [2024-07-25 11:53:00.548901] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:23.316 [2024-07-25 11:53:00.549005] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901023 ] 00:06:23.316 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.576 [2024-07-25 11:53:00.637657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.576 [2024-07-25 11:53:00.719753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.576 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:24.593 ====================================== 00:06:24.593 busy:2305460820 (cyc) 00:06:24.593 total_run_count: 842000 00:06:24.593 tsc_hz: 2300000000 (cyc) 00:06:24.593 ====================================== 00:06:24.593 poller_cost: 2738 (cyc), 1190 (nsec) 00:06:24.593 00:06:24.593 real 0m1.265s 00:06:24.593 user 0m1.150s 00:06:24.593 sys 0m0.110s 00:06:24.593 11:53:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.593 11:53:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.593 ************************************ 00:06:24.593 END TEST thread_poller_perf 00:06:24.593 ************************************ 00:06:24.593 11:53:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.593 11:53:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:24.593 11:53:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.593 11:53:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.593 ************************************ 00:06:24.593 START TEST thread_poller_perf 00:06:24.593 ************************************ 00:06:24.593 11:53:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.909 [2024-07-25 11:53:01.901149] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:24.909 [2024-07-25 11:53:01.901233] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901224 ] 00:06:24.909 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.909 [2024-07-25 11:53:01.990309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.909 [2024-07-25 11:53:02.073411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.909 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.848 ====================================== 00:06:25.848 busy:2301335292 (cyc) 00:06:25.848 total_run_count: 13873000 00:06:25.848 tsc_hz: 2300000000 (cyc) 00:06:25.848 ====================================== 00:06:25.848 poller_cost: 165 (cyc), 71 (nsec) 00:06:25.848 00:06:25.848 real 0m1.262s 00:06:25.848 user 0m1.147s 00:06:25.848 sys 0m0.110s 00:06:25.848 11:53:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.848 11:53:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.848 ************************************ 00:06:25.848 END TEST thread_poller_perf 00:06:25.848 ************************************ 00:06:26.109 11:53:03 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:26.109 11:53:03 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:26.109 11:53:03 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.109 11:53:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.109 11:53:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.109 ************************************ 00:06:26.109 START TEST thread_spdk_lock 00:06:26.109 ************************************ 00:06:26.109 11:53:03 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:26.109 [2024-07-25 11:53:03.246919] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:26.109 [2024-07-25 11:53:03.247007] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901424 ] 00:06:26.109 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.109 [2024-07-25 11:53:03.334331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.368 [2024-07-25 11:53:03.425366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.368 [2024-07-25 11:53:03.425367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.628 [2024-07-25 11:53:03.914293] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.628 [2024-07-25 11:53:03.914327] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:26.628 [2024-07-25 11:53:03.914353] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14d5bc0 00:06:26.628 [2024-07-25 11:53:03.915192] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.628 [2024-07-25 11:53:03.915297] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.628 [2024-07-25 11:53:03.915316] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:26.888 Starting test contend 00:06:26.888 Worker Delay Wait us Hold us Total us 00:06:26.888 0 3 176393 185199 361593 00:06:26.888 1 5 94797 285887 380685 00:06:26.888 PASS test contend 00:06:26.888 Starting test hold_by_poller 00:06:26.888 PASS test hold_by_poller 00:06:26.888 Starting test hold_by_message 00:06:26.888 PASS test hold_by_message 00:06:26.888 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:26.888 100014 assertions passed 00:06:26.888 0 assertions failed 00:06:26.888 00:06:26.888 real 0m0.753s 00:06:26.888 user 0m1.128s 00:06:26.888 sys 0m0.110s 00:06:26.888 11:53:03 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.888 11:53:03 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:26.888 ************************************ 00:06:26.888 END TEST thread_spdk_lock 00:06:26.888 ************************************ 00:06:26.888 00:06:26.888 real 0m3.641s 00:06:26.888 user 0m3.561s 00:06:26.888 sys 0m0.585s 00:06:26.888 11:53:04 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.888 11:53:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.888 ************************************ 00:06:26.888 END TEST thread 00:06:26.888 ************************************ 00:06:26.888 11:53:04 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:26.888 11:53:04 -- spdk/autotest.sh@189 -- # run_test app_cmdline 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:26.888 11:53:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.888 11:53:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.888 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.888 ************************************ 00:06:26.888 START TEST app_cmdline 00:06:26.888 ************************************ 00:06:26.888 11:53:04 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:27.148 * Looking for test storage... 00:06:27.148 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:27.148 11:53:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.148 11:53:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=901590 00:06:27.148 11:53:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 901590 00:06:27.148 11:53:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.148 11:53:04 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 901590 ']' 00:06:27.148 11:53:04 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.148 11:53:04 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.148 11:53:04 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.148 11:53:04 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.148 11:53:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.148 [2024-07-25 11:53:04.250250] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:27.148 [2024-07-25 11:53:04.250343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901590 ] 00:06:27.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.148 [2024-07-25 11:53:04.333449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.148 [2024-07-25 11:53:04.420436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.086 11:53:05 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.086 11:53:05 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:28.086 { 00:06:28.086 "version": "SPDK v24.09-pre git sha1 86fd5638b", 00:06:28.086 "fields": { 00:06:28.086 "major": 24, 00:06:28.086 "minor": 9, 00:06:28.086 "patch": 0, 00:06:28.086 "suffix": "-pre", 00:06:28.086 "commit": "86fd5638b" 00:06:28.086 } 00:06:28.086 } 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.086 11:53:05 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.086 11:53:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.086 11:53:05 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.086 11:53:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:28.087 11:53:05 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.346 request: 00:06:28.346 { 00:06:28.346 "method": "env_dpdk_get_mem_stats", 00:06:28.346 "req_id": 1 00:06:28.346 } 00:06:28.346 Got JSON-RPC error response 00:06:28.346 response: 00:06:28.346 { 00:06:28.346 "code": -32601, 00:06:28.346 "message": "Method not found" 00:06:28.346 } 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.346 11:53:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 901590 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 901590 ']' 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 901590 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.346 11:53:05 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 901590 00:06:28.347 11:53:05 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.347 11:53:05 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.347 11:53:05 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 901590' 00:06:28.347 killing process with pid 901590 00:06:28.347 11:53:05 app_cmdline -- common/autotest_common.sh@969 -- # kill 901590 00:06:28.347 11:53:05 app_cmdline -- common/autotest_common.sh@974 -- # wait 901590 00:06:28.606 00:06:28.606 real 0m1.768s 00:06:28.606 user 0m2.061s 00:06:28.606 sys 0m0.507s 00:06:28.606 11:53:05 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.606 11:53:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.606 ************************************ 00:06:28.606 END TEST app_cmdline 00:06:28.606 ************************************ 00:06:28.866 11:53:05 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:28.866 11:53:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.866 11:53:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.866 11:53:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.866 ************************************ 00:06:28.866 START TEST version 00:06:28.866 ************************************ 00:06:28.866 11:53:05 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:28.866 * Looking for test storage... 
00:06:28.866 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:28.866 11:53:06 version -- app/version.sh@17 -- # get_header_version major 00:06:28.866 11:53:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # cut -f2 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.866 11:53:06 version -- app/version.sh@17 -- # major=24 00:06:28.866 11:53:06 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.866 11:53:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # cut -f2 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.866 11:53:06 version -- app/version.sh@18 -- # minor=9 00:06:28.866 11:53:06 version -- app/version.sh@19 -- # get_header_version patch 00:06:28.866 11:53:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # cut -f2 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.866 11:53:06 version -- app/version.sh@19 -- # patch=0 00:06:28.866 11:53:06 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.866 11:53:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # cut -f2 00:06:28.866 11:53:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.866 11:53:06 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.866 11:53:06 version -- app/version.sh@22 -- # version=24.9 00:06:28.866 11:53:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.866 11:53:06 version -- app/version.sh@28 -- # version=24.9rc0 00:06:28.866 11:53:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:28.866 11:53:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.866 11:53:06 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:28.866 11:53:06 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:28.866 00:06:28.866 real 0m0.190s 00:06:28.866 user 0m0.094s 00:06:28.866 sys 0m0.145s 00:06:28.866 11:53:06 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.866 11:53:06 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.866 ************************************ 00:06:28.866 END TEST version 00:06:28.866 ************************************ 00:06:29.127 11:53:06 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@201 -- # [[ 0 -eq 1 ]] 00:06:29.127 11:53:06 -- spdk/autotest.sh@207 -- # uname -s 00:06:29.127 11:53:06 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:06:29.127 11:53:06 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:06:29.127 11:53:06 -- spdk/autotest.sh@208 -- 
# [[ 0 -eq 1 ]] 00:06:29.127 11:53:06 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@269 -- # timing_exit lib 00:06:29.127 11:53:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.127 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.127 11:53:06 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@285 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@322 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@336 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@362 -- # '[' 0 -eq 1 ']' 00:06:29.127 11:53:06 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:06:29.127 11:53:06 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:06:29.127 11:53:06 -- spdk/autotest.sh@377 -- # [[ 1 -eq 1 ]] 00:06:29.127 11:53:06 -- spdk/autotest.sh@378 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:29.127 11:53:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.127 11:53:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.127 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.127 ************************************ 00:06:29.127 START TEST llvm_fuzz 00:06:29.127 ************************************ 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:29.127 * Looking for test storage... 
00:06:29.127 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:29.127 11:53:06 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.127 11:53:06 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:29.389 ************************************ 00:06:29.389 START TEST nvmf_llvm_fuzz 00:06:29.389 ************************************ 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:29.389 * Looking for test storage... 
00:06:29.389 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:29.389 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:29.390 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:29.390 #define SPDK_CONFIG_H 00:06:29.391 #define SPDK_CONFIG_APPS 1 00:06:29.391 #define SPDK_CONFIG_ARCH native 00:06:29.391 #undef SPDK_CONFIG_ASAN 00:06:29.391 #undef SPDK_CONFIG_AVAHI 00:06:29.391 #undef SPDK_CONFIG_CET 00:06:29.391 #define SPDK_CONFIG_COVERAGE 1 00:06:29.391 #define SPDK_CONFIG_CROSS_PREFIX 00:06:29.391 #undef SPDK_CONFIG_CRYPTO 00:06:29.391 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:29.391 #undef SPDK_CONFIG_CUSTOMOCF 00:06:29.391 #undef SPDK_CONFIG_DAOS 00:06:29.391 #define SPDK_CONFIG_DAOS_DIR 00:06:29.391 #define SPDK_CONFIG_DEBUG 1 00:06:29.391 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:29.391 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:29.391 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:29.391 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:29.391 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:29.391 #undef SPDK_CONFIG_DPDK_UADK 00:06:29.391 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:29.391 #define SPDK_CONFIG_EXAMPLES 1 00:06:29.391 #undef SPDK_CONFIG_FC 00:06:29.391 #define SPDK_CONFIG_FC_PATH 00:06:29.391 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:29.391 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:29.391 #undef SPDK_CONFIG_FUSE 00:06:29.391 #define SPDK_CONFIG_FUZZER 1 00:06:29.391 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:29.391 #undef SPDK_CONFIG_GOLANG 00:06:29.391 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:29.391 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:29.391 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:29.391 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:29.391 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:29.391 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:29.391 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:29.391 #define SPDK_CONFIG_IDXD 1 00:06:29.391 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:29.391 #undef SPDK_CONFIG_IPSEC_MB 00:06:29.391 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:29.391 #define SPDK_CONFIG_ISAL 1 00:06:29.391 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:06:29.391 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:29.391 #define SPDK_CONFIG_LIBDIR 00:06:29.391 #undef SPDK_CONFIG_LTO 00:06:29.391 #define SPDK_CONFIG_MAX_LCORES 128 00:06:29.391 #define SPDK_CONFIG_NVME_CUSE 1 00:06:29.391 #undef SPDK_CONFIG_OCF 00:06:29.391 #define SPDK_CONFIG_OCF_PATH 00:06:29.391 #define SPDK_CONFIG_OPENSSL_PATH 00:06:29.391 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:29.391 #define SPDK_CONFIG_PGO_DIR 00:06:29.391 #undef SPDK_CONFIG_PGO_USE 00:06:29.391 #define SPDK_CONFIG_PREFIX /usr/local 00:06:29.391 #undef SPDK_CONFIG_RAID5F 00:06:29.391 #undef SPDK_CONFIG_RBD 00:06:29.391 #define SPDK_CONFIG_RDMA 1 00:06:29.391 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:29.391 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:29.391 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:29.391 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:29.391 #undef SPDK_CONFIG_SHARED 00:06:29.391 #undef SPDK_CONFIG_SMA 00:06:29.391 #define SPDK_CONFIG_TESTS 1 00:06:29.391 #undef SPDK_CONFIG_TSAN 00:06:29.391 #define SPDK_CONFIG_UBLK 1 00:06:29.391 #define SPDK_CONFIG_UBSAN 1 00:06:29.391 #undef SPDK_CONFIG_UNIT_TESTS 00:06:29.391 #undef SPDK_CONFIG_URING 00:06:29.391 #define SPDK_CONFIG_URING_PATH 00:06:29.391 #undef SPDK_CONFIG_URING_ZNS 00:06:29.391 #undef SPDK_CONFIG_USDT 00:06:29.391 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:29.391 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:29.391 #define SPDK_CONFIG_VFIO_USER 1 00:06:29.391 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:29.391 #define SPDK_CONFIG_VHOST 1 00:06:29.391 #define SPDK_CONFIG_VIRTIO 1 00:06:29.391 #undef SPDK_CONFIG_VTUNE 00:06:29.391 #define SPDK_CONFIG_VTUNE_DIR 00:06:29.391 #define SPDK_CONFIG_WERROR 1 00:06:29.391 #define SPDK_CONFIG_WPDK_DIR 00:06:29.391 #undef SPDK_CONFIG_XNVME 00:06:29.391 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:29.391 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:29.392 
11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:29.392 11:53:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:29.392 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:06:29.393 11:53:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:29.393 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:06:29.394 11:53:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j72 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 902020 ]] 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 902020 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.Ydx0rW 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:06:29.394 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.Ydx0rW/tests/nvmf /tmp/spdk.Ydx0rW 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=945618944 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4338810880 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=50340040704 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742534656 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=11402493952 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30866554880 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871265280 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4710400 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342714368 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348510208 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=5795840 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870933504 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871269376 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=335872 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174248960 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174253056 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:06:29.655 * Looking for test storage... 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=50340040704 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=13617086464 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:29.655 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:29.655 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:29.656 11:53:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:29.656 [2024-07-25 11:53:06.784613] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:29.656 [2024-07-25 11:53:06.784701] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902062 ] 00:06:29.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.916 [2024-07-25 11:53:07.004323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.916 [2024-07-25 11:53:07.075513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.916 [2024-07-25 11:53:07.135311] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.916 [2024-07-25 11:53:07.151644] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:29.916 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.916 INFO: Seed: 3078029702 00:06:29.916 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:29.916 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:29.916 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:29.916 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.916 #2 INITED exec/s: 0 rss: 65Mb 00:06:29.916 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:29.916 This may also happen if the target rejected all inputs we tried so far 00:06:30.175 [2024-07-25 11:53:07.231800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:30.175 [2024-07-25 11:53:07.231839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.435 NEW_FUNC[1/700]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:30.435 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:30.435 #13 NEW cov: 11941 ft: 11934 corp: 2/103b lim: 320 exec/s: 0 rss: 72Mb L: 102/102 MS: 1 InsertRepeatedBytes- 00:06:30.435 [2024-07-25 11:53:07.632654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:30.435 [2024-07-25 11:53:07.632705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.435 #18 NEW cov: 12071 ft: 12664 corp: 3/227b lim: 320 exec/s: 0 rss: 72Mb L: 124/124 MS: 5 CopyPart-CrossOver-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:06:30.435 [2024-07-25 11:53:07.683096] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4e4e4e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x4e4e4e4e4e4e1e1e 00:06:30.435 [2024-07-25 11:53:07.683123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.435 [2024-07-25 11:53:07.683205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:30.435 [2024-07-25 11:53:07.683220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.435 #24 NEW cov: 12099 ft: 13184 corp: 4/374b lim: 320 exec/s: 0 rss: 72Mb L: 147/147 MS: 1 InsertRepeatedBytes- 00:06:30.694 [2024-07-25 11:53:07.742999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:30.694 [2024-07-25 11:53:07.743026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.694 #25 NEW cov: 12184 ft: 13473 corp: 5/476b lim: 320 exec/s: 0 rss: 72Mb L: 102/147 MS: 1 ChangeByte- 00:06:30.694 [2024-07-25 11:53:07.803163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:30.694 [2024-07-25 11:53:07.803189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.694 #36 NEW cov: 12184 ft: 13528 corp: 6/578b lim: 320 exec/s: 0 rss: 72Mb L: 102/147 MS: 1 ChangeBit- 00:06:30.694 [2024-07-25 11:53:07.863405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:30.694 [2024-07-25 11:53:07.863430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.694 #37 NEW cov: 12184 ft: 13604 corp: 7/681b lim: 320 exec/s: 0 rss: 73Mb L: 103/147 MS: 1 InsertByte- 00:06:30.694 [2024-07-25 11:53:07.923615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:30.694 [2024-07-25 11:53:07.923641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.694 #38 NEW cov: 12184 ft: 13656 corp: 8/784b lim: 320 exec/s: 0 rss: 73Mb L: 103/147 MS: 1 CrossOver- 00:06:30.694 [2024-07-25 11:53:07.974049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.694 [2024-07-25 11:53:07.974074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.694 [2024-07-25 11:53:07.974172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.694 [2024-07-25 11:53:07.974188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.694 [2024-07-25 11:53:07.974282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e7) qid:0 cid:6 nsid:eaeaeaea cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:30.694 [2024-07-25 11:53:07.974300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.954 NEW_FUNC[1/1]: 0x139b500 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2093 00:06:30.954 #39 NEW cov: 12215 ft: 13877 corp: 9/1027b lim: 320 exec/s: 0 rss: 73Mb L: 243/243 MS: 1 InsertRepeatedBytes- 00:06:30.954 [2024-07-25 11:53:08.024432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.954 [2024-07-25 11:53:08.024458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.954 [2024-07-25 11:53:08.024555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.954 [2024-07-25 11:53:08.024570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.954 [2024-07-25 11:53:08.024662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:6 nsid:eaeaeaea cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:30.954 [2024-07-25 11:53:08.024677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.954 #40 NEW cov: 12215 ft: 13940 corp: 10/1271b lim: 320 exec/s: 0 rss: 73Mb L: 244/244 MS: 1 InsertByte- 00:06:30.954 [2024-07-25 11:53:08.084089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:30.954 [2024-07-25 
11:53:08.084114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.954 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:30.954 #46 NEW cov: 12238 ft: 14018 corp: 11/1362b lim: 320 exec/s: 0 rss: 73Mb L: 91/244 MS: 1 EraseBytes- 00:06:30.954 [2024-07-25 11:53:08.134863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.954 [2024-07-25 11:53:08.134888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.954 [2024-07-25 11:53:08.134990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:30.955 [2024-07-25 11:53:08.135006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.955 [2024-07-25 11:53:08.135107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e7) qid:0 cid:6 nsid:eaeaeaea cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:30.955 [2024-07-25 11:53:08.135124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.955 [2024-07-25 11:53:08.135216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:7 nsid:eaeaeaea cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e4e1e1e1e 00:06:30.955 [2024-07-25 11:53:08.135233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.955 #47 NEW cov: 12238 ft: 14194 corp: 12/1620b lim: 320 exec/s: 0 rss: 73Mb L: 258/258 MS: 1 CrossOver- 00:06:30.955 [2024-07-25 11:53:08.184612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:30.955 [2024-07-25 11:53:08.184638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.955 [2024-07-25 11:53:08.184749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:5 nsid:eaeaeaea cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeae8 00:06:30.955 [2024-07-25 11:53:08.184782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.955 #48 NEW cov: 12238 ft: 14204 corp: 13/1802b lim: 320 exec/s: 48 rss: 73Mb L: 182/258 MS: 1 InsertRepeatedBytes- 00:06:30.955 [2024-07-25 11:53:08.255185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4e4e4e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x4e4e4e4e4e4e1e1e 00:06:30.955 [2024-07-25 11:53:08.255213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.955 [2024-07-25 11:53:08.255305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:30.955 [2024-07-25 
11:53:08.255321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.214 #49 NEW cov: 12238 ft: 14220 corp: 14/1949b lim: 320 exec/s: 49 rss: 73Mb L: 147/258 MS: 1 ShuffleBytes- 00:06:31.214 [2024-07-25 11:53:08.325571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.214 [2024-07-25 11:53:08.325601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.214 [2024-07-25 11:53:08.325700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:5 nsid:eaeaeaea cdw10:eaeaeaea cdw11:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.214 [2024-07-25 11:53:08.325716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.214 [2024-07-25 11:53:08.325806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:6 nsid:eaeaeaea cdw10:eaeaeaea cdw11:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.214 [2024-07-25 11:53:08.325822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.214 #55 NEW cov: 12238 ft: 14274 corp: 15/2165b lim: 320 exec/s: 55 rss: 73Mb L: 216/258 MS: 1 CrossOver- 00:06:31.214 [2024-07-25 11:53:08.395592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4e4e4e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x4e4e4e4e4e4e1e1e 00:06:31.214 [2024-07-25 11:53:08.395622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.214 [2024-07-25 11:53:08.395725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:31.214 [2024-07-25 11:53:08.395747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.214 #56 NEW cov: 12238 ft: 14286 corp: 16/2312b lim: 320 exec/s: 56 rss: 73Mb L: 147/258 MS: 1 ChangeByte- 00:06:31.214 [2024-07-25 11:53:08.466057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.214 [2024-07-25 11:53:08.466085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.214 [2024-07-25 11:53:08.466189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:5 nsid:eaeaeaea cdw10:eaeaeaea cdw11:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.214 [2024-07-25 11:53:08.466207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.214 [2024-07-25 11:53:08.466304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:6 nsid:eaeaeaea cdw10:eaeaeaea cdw11:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.214 [2024-07-25 11:53:08.466321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:31.214 #57 NEW cov: 12238 ft: 14303 corp: 17/2529b lim: 320 exec/s: 57 rss: 73Mb L: 217/258 MS: 1 InsertByte- 00:06:31.473 [2024-07-25 11:53:08.536110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:4e4e4e4e SGL TRANSPORT DATA BLOCK TRANSPORT 0x4e4e4e4e4e4e1e1e 00:06:31.473 [2024-07-25 11:53:08.536139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.473 [2024-07-25 11:53:08.536230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e601e 00:06:31.473 [2024-07-25 11:53:08.536247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.473 #58 NEW cov: 12238 ft: 14306 corp: 18/2676b lim: 320 exec/s: 58 rss: 73Mb L: 147/258 MS: 1 ChangeByte- 00:06:31.473 [2024-07-25 11:53:08.586051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.473 [2024-07-25 11:53:08.586078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.473 #59 NEW cov: 12238 ft: 14404 corp: 19/2780b lim: 320 exec/s: 59 rss: 73Mb L: 104/258 MS: 1 InsertByte- 00:06:31.473 [2024-07-25 11:53:08.636215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.473 [2024-07-25 11:53:08.636242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.473 #60 NEW cov: 12238 ft: 14424 corp: 20/2882b lim: 320 exec/s: 60 rss: 73Mb L: 102/258 MS: 1 ChangeBit- 00:06:31.473 [2024-07-25 11:53:08.686269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.473 [2024-07-25 11:53:08.686295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.473 #61 NEW cov: 12238 ft: 14451 corp: 21/2985b lim: 320 exec/s: 61 rss: 73Mb L: 103/258 MS: 1 ChangeByte- 00:06:31.473 [2024-07-25 11:53:08.736756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.473 [2024-07-25 11:53:08.736782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.473 [2024-07-25 11:53:08.736887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:5 nsid:eaeaeaea cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeae8 00:06:31.473 [2024-07-25 11:53:08.736904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.473 #62 NEW cov: 12238 ft: 14460 corp: 22/3167b lim: 320 exec/s: 62 rss: 73Mb L: 182/258 MS: 1 ChangeBinInt- 00:06:31.733 [2024-07-25 11:53:08.786671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.733 [2024-07-25 11:53:08.786697] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.733 #63 NEW cov: 12238 ft: 14491 corp: 23/3270b lim: 320 exec/s: 63 rss: 73Mb L: 103/258 MS: 1 ChangeBit- 00:06:31.733 [2024-07-25 11:53:08.847341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:31.733 [2024-07-25 11:53:08.847367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.733 [2024-07-25 11:53:08.847475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:31.733 [2024-07-25 11:53:08.847492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.733 [2024-07-25 11:53:08.847582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:eaeaeaea cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:31.733 [2024-07-25 11:53:08.847596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.733 #64 NEW cov: 12238 ft: 14509 corp: 24/3522b lim: 320 exec/s: 64 rss: 74Mb L: 252/258 MS: 1 CMP- DE: "\000r\270\020\020\000\000\000"- 00:06:31.733 [2024-07-25 11:53:08.907171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.733 [2024-07-25 11:53:08.907196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.733 #65 NEW cov: 12238 ft: 14544 corp: 25/3619b lim: 320 exec/s: 65 rss: 74Mb L: 97/258 MS: 1 EraseBytes- 00:06:31.733 [2024-07-25 11:53:08.967355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.733 [2024-07-25 11:53:08.967381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.733 #66 NEW cov: 12238 ft: 14559 corp: 26/3710b lim: 320 exec/s: 66 rss: 74Mb L: 91/258 MS: 1 ChangeBinInt- 00:06:31.733 [2024-07-25 11:53:09.027926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.733 [2024-07-25 11:53:09.027951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.733 [2024-07-25 11:53:09.028046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:5 nsid:eaeaeaea cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeae8ea 00:06:31.733 [2024-07-25 11:53:09.028061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.993 #67 NEW cov: 12238 ft: 14571 corp: 27/3893b lim: 320 exec/s: 67 rss: 74Mb L: 183/258 MS: 1 InsertByte- 00:06:31.993 [2024-07-25 11:53:09.087759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e3) qid:0 cid:4 nsid:e3e3e3e3 cdw10:e3e3e3e3 cdw11:e3e3e3e3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0xe3e3e3e3e3e3e3e3 00:06:31.993 [2024-07-25 11:53:09.087785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.993 #70 NEW cov: 12238 ft: 14609 corp: 28/3963b lim: 320 exec/s: 70 rss: 74Mb L: 70/258 MS: 3 InsertRepeatedBytes-ShuffleBytes-InsertRepeatedBytes- 00:06:31.993 [2024-07-25 11:53:09.138518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:06:31.993 [2024-07-25 11:53:09.138544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.993 [2024-07-25 11:53:09.138646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.993 [2024-07-25 11:53:09.138661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.993 [2024-07-25 11:53:09.138751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:1e1e1e1e cdw11:1e1e1e1e 00:06:31.993 [2024-07-25 11:53:09.138777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.993 #71 NEW cov: 12240 ft: 14645 corp: 29/4168b lim: 320 exec/s: 71 rss: 74Mb L: 205/258 MS: 1 InsertRepeatedBytes- 00:06:31.993 [2024-07-25 11:53:09.188176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:06:31.993 [2024-07-25 11:53:09.188202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.993 #72 NEW cov: 12240 ft: 14652 corp: 30/4270b lim: 320 exec/s: 36 rss: 74Mb L: 102/258 MS: 1 ChangeBit- 00:06:31.993 #72 DONE cov: 12240 ft: 14652 corp: 30/4270b lim: 320 exec/s: 36 rss: 74Mb 00:06:31.993 ###### Recommended dictionary. ###### 00:06:31.993 "\000r\270\020\020\000\000\000" # Uses: 0 00:06:31.993 ###### End of recommended dictionary. 
###### 00:06:31.993 Done 72 runs in 2 second(s) 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:32.253 11:53:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:32.253 [2024-07-25 11:53:09.381777] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:32.253 [2024-07-25 11:53:09.381848] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902427 ] 00:06:32.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.512 [2024-07-25 11:53:09.600618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.512 [2024-07-25 11:53:09.674501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.512 [2024-07-25 11:53:09.734547] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.512 [2024-07-25 11:53:09.750873] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:32.512 INFO: Running with entropic power schedule (0xFF, 100). 00:06:32.512 INFO: Seed: 1382038131 00:06:32.512 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:32.512 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:32.512 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:32.513 INFO: A corpus is not provided, starting from an empty corpus 00:06:32.513 #2 INITED exec/s: 0 rss: 65Mb 00:06:32.513 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:32.513 This may also happen if the target rejected all inputs we tried so far 00:06:32.772 [2024-07-25 11:53:09.827640] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10512) > buf size (4096) 00:06:32.772 [2024-07-25 11:53:09.828153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.772 [2024-07-25 11:53:09.828197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.032 NEW_FUNC[1/701]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:33.032 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.032 #3 NEW cov: 12059 ft: 12055 corp: 2/10b lim: 30 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CMP- DE: "C\000\000\000\000\000\000\000"- 00:06:33.032 [2024-07-25 11:53:10.178497] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:33.032 [2024-07-25 11:53:10.178789] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (800524) > buf size (4096) 00:06:33.032 [2024-07-25 11:53:10.179278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-07-25 11:53:10.179330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.033 [2024-07-25 11:53:10.179433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0dc28384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-07-25 11:53:10.179455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.033 #4 NEW cov: 12178 ft: 13032 corp: 3/27b lim: 30 
exec/s: 0 rss: 72Mb L: 17/17 MS: 1 CMP- DE: "\001\032\015\302\204\207\314\330"- 00:06:33.033 [2024-07-25 11:53:10.248970] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10512) > buf size (4096) 00:06:33.033 [2024-07-25 11:53:10.249237] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007f7f 00:06:33.033 [2024-07-25 11:53:10.249487] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (130560) > buf size (4096) 00:06:33.033 [2024-07-25 11:53:10.249939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-07-25 11:53:10.249970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.033 [2024-07-25 11:53:10.250063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7f7f837f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-07-25 11:53:10.250081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.033 [2024-07-25 11:53:10.250181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:7f7f007f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-07-25 11:53:10.250196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.033 #5 NEW cov: 12184 ft: 13449 corp: 4/45b lim: 30 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:33.033 [2024-07-25 11:53:10.299407] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:33.033 [2024-07-25 11:53:10.299658] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (799928) > buf size (4096) 00:06:33.033 [2024-07-25 11:53:10.300126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-07-25 11:53:10.300155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.033 [2024-07-25 11:53:10.300246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0d2d8384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-07-25 11:53:10.300264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.293 #6 NEW cov: 12269 ft: 13798 corp: 5/62b lim: 30 exec/s: 0 rss: 72Mb L: 17/18 MS: 1 ChangeByte- 00:06:33.293 [2024-07-25 11:53:10.370218] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:33.293 [2024-07-25 11:53:10.371711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.371742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.371849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 
11:53:10.371866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.371964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.371980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.372076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.372091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.372183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.372198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.293 #12 NEW cov: 12286 ft: 14447 corp: 6/92b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:33.293 [2024-07-25 11:53:10.430211] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:06:33.293 [2024-07-25 11:53:10.430481] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (787640) > buf size (4096) 00:06:33.293 [2024-07-25 11:53:10.430976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4300020d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.431007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.431103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:012d8384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.431121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.293 #13 NEW cov: 12286 ft: 14536 corp: 7/109b lim: 30 exec/s: 0 rss: 72Mb L: 17/30 MS: 1 ShuffleBytes- 00:06:33.293 [2024-07-25 11:53:10.500659] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:33.293 [2024-07-25 11:53:10.502137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.502168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.502269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.502285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.502379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.502396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.293 [2024-07-25 11:53:10.502491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.293 [2024-07-25 11:53:10.502507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.294 [2024-07-25 11:53:10.502598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.294 [2024-07-25 11:53:10.502613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.294 #14 NEW cov: 12286 ft: 14595 corp: 8/139b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CopyPart- 00:06:33.294 [2024-07-25 11:53:10.570797] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:33.294 [2024-07-25 11:53:10.571057] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (800524) > buf size (4096) 00:06:33.294 [2024-07-25 11:53:10.571298] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:06:33.294 [2024-07-25 11:53:10.571993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.294 [2024-07-25 11:53:10.572021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.294 [2024-07-25 11:53:10.572121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0dc28384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.294 [2024-07-25 11:53:10.572136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.294 [2024-07-25 11:53:10.572229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0a008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.294 [2024-07-25 11:53:10.572244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.294 [2024-07-25 11:53:10.572339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.294 [2024-07-25 11:53:10.572355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.553 #15 NEW cov: 12286 ft: 14731 corp: 9/165b lim: 30 exec/s: 0 rss: 72Mb L: 26/30 MS: 1 CrossOver- 00:06:33.553 [2024-07-25 11:53:10.620793] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10512) > buf size (4096) 00:06:33.554 [2024-07-25 11:53:10.621249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430030 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.621276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.554 #16 NEW cov: 12286 ft: 14843 
corp: 10/175b lim: 30 exec/s: 0 rss: 72Mb L: 10/30 MS: 1 InsertByte- 00:06:33.554 [2024-07-25 11:53:10.670969] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10512) > buf size (4096) 00:06:33.554 [2024-07-25 11:53:10.671417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.671446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.554 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:33.554 #17 NEW cov: 12309 ft: 14937 corp: 11/185b lim: 30 exec/s: 0 rss: 72Mb L: 10/30 MS: 1 InsertByte- 00:06:33.554 [2024-07-25 11:53:10.721728] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:33.554 [2024-07-25 11:53:10.722035] ctrlr.c:2689:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:06:33.554 [2024-07-25 11:53:10.723256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.723284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.723380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.723397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.723496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.723513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.723609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.723625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.723722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.723741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.554 #18 NEW cov: 12315 ft: 15024 corp: 12/215b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CrossOver- 00:06:33.554 [2024-07-25 11:53:10.771981] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7f 00:06:33.554 [2024-07-25 11:53:10.772242] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (916992) > buf size (4096) 00:06:33.554 [2024-07-25 11:53:10.772686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.772713] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.772819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7f7f837f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.772835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.554 #19 NEW cov: 12315 ft: 15062 corp: 13/232b lim: 30 exec/s: 0 rss: 72Mb L: 17/30 MS: 1 CrossOver- 00:06:33.554 [2024-07-25 11:53:10.822347] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:33.554 [2024-07-25 11:53:10.822606] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (800524) > buf size (4096) 00:06:33.554 [2024-07-25 11:53:10.822877] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:06:33.554 [2024-07-25 11:53:10.823569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.823600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.823686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0dc28384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.823703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.823798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0a008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.823815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.554 [2024-07-25 11:53:10.823916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.554 [2024-07-25 11:53:10.823933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.813 #20 NEW cov: 12315 ft: 15110 corp: 14/261b lim: 30 exec/s: 20 rss: 72Mb L: 29/30 MS: 1 InsertRepeatedBytes- 00:06:33.813 [2024-07-25 11:53:10.892397] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:33.813 [2024-07-25 11:53:10.893105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.813 [2024-07-25 11:53:10.893132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.813 [2024-07-25 11:53:10.893226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.813 [2024-07-25 11:53:10.893243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.813 #21 NEW cov: 12315 ft: 15133 corp: 15/278b lim: 30 exec/s: 21 rss: 73Mb L: 17/30 MS: 1 
EraseBytes- 00:06:33.813 [2024-07-25 11:53:10.962722] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:33.813 [2024-07-25 11:53:10.962987] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (800524) > buf size (4096) 00:06:33.813 [2024-07-25 11:53:10.963234] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:06:33.813 [2024-07-25 11:53:10.963484] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfa 00:06:33.813 [2024-07-25 11:53:10.963970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.813 [2024-07-25 11:53:10.964001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.813 [2024-07-25 11:53:10.964095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0dc28384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.813 [2024-07-25 11:53:10.964112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.813 [2024-07-25 11:53:10.964206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0a008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.813 [2024-07-25 11:53:10.964223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.813 [2024-07-25 11:53:10.964317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.813 [2024-07-25 11:53:10.964335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.813 #22 NEW cov: 12315 ft: 15160 corp: 16/304b lim: 30 exec/s: 22 rss: 73Mb L: 26/30 MS: 1 ChangeBinInt- 00:06:33.813 [2024-07-25 11:53:11.012966] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:33.813 [2024-07-25 11:53:11.013206] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (800524) > buf size (4096) 00:06:33.813 [2024-07-25 11:53:11.013458] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:06:33.813 [2024-07-25 11:53:11.013715] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfa 00:06:33.813 [2024-07-25 11:53:11.014174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.814 [2024-07-25 11:53:11.014204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.814 [2024-07-25 11:53:11.014294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0dc28384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.814 [2024-07-25 11:53:11.014310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.814 [2024-07-25 11:53:11.014408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0a008340 cdw11:00000003 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:33.814 [2024-07-25 11:53:11.014423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.814 [2024-07-25 11:53:11.014519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.814 [2024-07-25 11:53:11.014535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.814 #23 NEW cov: 12315 ft: 15187 corp: 17/330b lim: 30 exec/s: 23 rss: 73Mb L: 26/30 MS: 1 ChangeBit- 00:06:33.814 [2024-07-25 11:53:11.083094] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000001 00:06:33.814 [2024-07-25 11:53:11.083379] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (46624) > buf size (4096) 00:06:33.814 [2024-07-25 11:53:11.083822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4300020d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.814 [2024-07-25 11:53:11.083851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.814 [2024-07-25 11:53:11.083952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2d87000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.814 [2024-07-25 11:53:11.083969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.073 #24 NEW cov: 12315 ft: 15198 corp: 18/347b lim: 30 exec/s: 24 rss: 73Mb L: 17/30 MS: 1 ShuffleBytes- 00:06:34.073 [2024-07-25 11:53:11.143429] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:34.073 [2024-07-25 11:53:11.143685] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (799928) > buf size (4096) 00:06:34.073 [2024-07-25 11:53:11.144127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.144158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.073 [2024-07-25 11:53:11.144251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0d2d8384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.144270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.073 #25 NEW cov: 12315 ft: 15226 corp: 19/364b lim: 30 exec/s: 25 rss: 73Mb L: 17/30 MS: 1 ChangeByte- 00:06:34.073 [2024-07-25 11:53:11.193487] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000d0d 00:06:34.073 [2024-07-25 11:53:11.193964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0d810d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.193994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.073 #26 NEW cov: 12315 ft: 15259 corp: 20/374b lim: 30 exec/s: 26 rss: 73Mb L: 10/30 MS: 1 InsertRepeatedBytes- 00:06:34.073 [2024-07-25 11:53:11.243831] 
ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000001 00:06:34.073 [2024-07-25 11:53:11.244096] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (46624) > buf size (4096) 00:06:34.073 [2024-07-25 11:53:11.244539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4300020d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.244570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.073 [2024-07-25 11:53:11.244669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2d87000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.244686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.073 #27 NEW cov: 12315 ft: 15303 corp: 21/391b lim: 30 exec/s: 27 rss: 73Mb L: 17/30 MS: 1 ChangeByte- 00:06:34.073 [2024-07-25 11:53:11.314266] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:34.073 [2024-07-25 11:53:11.314526] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:34.073 [2024-07-25 11:53:11.314798] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:34.073 [2024-07-25 11:53:11.315234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4383ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.315263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.073 [2024-07-25 11:53:11.315361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.315378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.073 [2024-07-25 11:53:11.315472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.315489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.073 #28 NEW cov: 12315 ft: 15320 corp: 22/413b lim: 30 exec/s: 28 rss: 73Mb L: 22/30 MS: 1 InsertRepeatedBytes- 00:06:34.073 [2024-07-25 11:53:11.364487] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:34.073 [2024-07-25 11:53:11.364734] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (209764) > buf size (4096) 00:06:34.073 [2024-07-25 11:53:11.365944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.073 [2024-07-25 11:53:11.365972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.074 [2024-07-25 11:53:11.366063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ccd80000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.074 
[2024-07-25 11:53:11.366078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.074 [2024-07-25 11:53:11.366174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.074 [2024-07-25 11:53:11.366190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.074 [2024-07-25 11:53:11.366282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.074 [2024-07-25 11:53:11.366297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.074 [2024-07-25 11:53:11.366392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.074 [2024-07-25 11:53:11.366408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.333 #29 NEW cov: 12315 ft: 15331 corp: 23/443b lim: 30 exec/s: 29 rss: 73Mb L: 30/30 MS: 1 CrossOver- 00:06:34.333 [2024-07-25 11:53:11.415032] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.415289] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (209764) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.416006] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.416473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.416502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.416596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ccd80000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.416612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.416707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.416723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.416830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.416845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.416941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:000002ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.416956] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.333 #30 NEW cov: 12315 ft: 15358 corp: 24/473b lim: 30 exec/s: 30 rss: 73Mb L: 30/30 MS: 1 CMP- DE: "\377\036"- 00:06:34.333 [2024-07-25 11:53:11.485225] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:34.333 [2024-07-25 11:53:11.485486] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (275464) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.485752] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (925492) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.486006] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfa 00:06:34.333 [2024-07-25 11:53:11.486475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.486505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.486598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0d01811a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.486615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.486713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:87cc83d8 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.486728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.486831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.486847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.333 #31 NEW cov: 12315 ft: 15385 corp: 25/499b lim: 30 exec/s: 31 rss: 73Mb L: 26/30 MS: 1 PersAutoDict- DE: "\001\032\015\302\204\207\314\330"- 00:06:34.333 [2024-07-25 11:53:11.535203] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:34.333 [2024-07-25 11:53:11.535468] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (226360) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.535717] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (221188) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.536181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.536212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.536308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:dd0d002d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.536323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.536418] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d800003b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.536433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.333 #32 NEW cov: 12315 ft: 15392 corp: 26/517b lim: 30 exec/s: 32 rss: 73Mb L: 18/30 MS: 1 InsertByte- 00:06:34.333 [2024-07-25 11:53:11.605472] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:34.333 [2024-07-25 11:53:11.606178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.606207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.333 [2024-07-25 11:53:11.606298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.333 [2024-07-25 11:53:11.606315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.593 #33 NEW cov: 12315 ft: 15491 corp: 27/534b lim: 30 exec/s: 33 rss: 73Mb L: 17/30 MS: 1 ShuffleBytes- 00:06:34.593 [2024-07-25 11:53:11.665587] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000d0a 00:06:34.593 [2024-07-25 11:53:11.666075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff1e810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.593 [2024-07-25 11:53:11.666103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.593 #35 NEW cov: 12315 ft: 15530 corp: 28/543b lim: 30 exec/s: 35 rss: 73Mb L: 9/30 MS: 2 PersAutoDict-CrossOver- DE: "\377\036"- 00:06:34.593 [2024-07-25 11:53:11.716152] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:34.593 [2024-07-25 11:53:11.716446] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (799928) > buf size (4096) 00:06:34.593 [2024-07-25 11:53:11.716904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.593 [2024-07-25 11:53:11.716933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.593 [2024-07-25 11:53:11.717023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0d2d8384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.593 [2024-07-25 11:53:11.717039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.593 #36 NEW cov: 12315 ft: 15553 corp: 29/560b lim: 30 exec/s: 36 rss: 73Mb L: 17/30 MS: 1 ChangeBit- 00:06:34.593 [2024-07-25 11:53:11.766455] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796944) > buf size (4096) 00:06:34.593 [2024-07-25 11:53:11.766713] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008487 00:06:34.593 [2024-07-25 11:53:11.766976] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x43 00:06:34.593 
[2024-07-25 11:53:11.767685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a438300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.767713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.594 [2024-07-25 11:53:11.767816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:011a020d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.767833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.594 [2024-07-25 11:53:11.767922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ccd8000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.767938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.594 [2024-07-25 11:53:11.768035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.768051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.594 #37 NEW cov: 12315 ft: 15560 corp: 30/588b lim: 30 exec/s: 37 rss: 73Mb L: 28/30 MS: 1 PersAutoDict- DE: "\377\036"- 00:06:34.594 [2024-07-25 11:53:11.816652] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11a 00:06:34.594 [2024-07-25 11:53:11.816933] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (800524) > buf size (4096) 00:06:34.594 [2024-07-25 11:53:11.817198] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:06:34.594 [2024-07-25 11:53:11.817453] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfa 00:06:34.594 [2024-07-25 11:53:11.817928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a430000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.817957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.594 [2024-07-25 11:53:11.818051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0dc28384 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.818066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.594 [2024-07-25 11:53:11.818172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0a008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.818189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.594 [2024-07-25 11:53:11.818284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.594 [2024-07-25 11:53:11.818299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.594 #38 NEW cov: 12315 ft: 15576 corp: 31/614b lim: 30 exec/s: 19 rss: 73Mb L: 26/30 MS: 1 ChangeByte- 00:06:34.594 #38 DONE cov: 12315 ft: 15576 corp: 31/614b lim: 30 exec/s: 19 rss: 73Mb 00:06:34.594 ###### Recommended dictionary. ###### 00:06:34.594 "C\000\000\000\000\000\000\000" # Uses: 0 00:06:34.594 "\001\032\015\302\204\207\314\330" # Uses: 1 00:06:34.594 "\377\036" # Uses: 2 00:06:34.594 ###### End of recommended dictionary. ###### 00:06:34.594 Done 38 runs in 2 second(s) 00:06:34.853 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:34.853 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:34.853 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:34.853 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:34.853 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:34.853 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:34.853 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:34.854 11:53:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:34.854 [2024-07-25 11:53:12.011546] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:34.854 [2024-07-25 11:53:12.011637] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902799 ] 00:06:34.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.113 [2024-07-25 11:53:12.222984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.113 [2024-07-25 11:53:12.293301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.113 [2024-07-25 11:53:12.353068] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.113 [2024-07-25 11:53:12.369380] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:35.113 INFO: Running with entropic power schedule (0xFF, 100). 00:06:35.113 INFO: Seed: 4001055941 00:06:35.113 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:35.113 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:35.113 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:35.113 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.113 #2 INITED exec/s: 0 rss: 65Mb 00:06:35.113 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:35.113 This may also happen if the target rejected all inputs we tried so far 00:06:35.372 [2024-07-25 11:53:12.447579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.372 [2024-07-25 11:53:12.447623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.372 [2024-07-25 11:53:12.447745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.372 [2024-07-25 11:53:12.447764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.372 [2024-07-25 11:53:12.447865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.372 [2024-07-25 11:53:12.447882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.372 [2024-07-25 11:53:12.447986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.372 [2024-07-25 11:53:12.448003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.632 NEW_FUNC[1/700]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:35.632 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:35.632 #8 NEW cov: 11980 ft: 11972 corp: 2/35b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:35.632 [2024-07-25 11:53:12.808346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.808398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.808503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.808524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.808633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.808654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.808771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.808792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.632 #14 NEW cov: 12110 ft: 12515 corp: 3/69b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBit- 00:06:35.632 [2024-07-25 11:53:12.878514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.878547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.878646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.878663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.878756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.878781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.878872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.878888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.632 #15 NEW cov: 12116 ft: 12863 corp: 4/103b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:06:35.632 [2024-07-25 11:53:12.929003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.929030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.929132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 
cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.929149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.929245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.929263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.929361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:60190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.929377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.632 [2024-07-25 11:53:12.929482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.632 [2024-07-25 11:53:12.929497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.892 #21 NEW cov: 12201 ft: 13196 corp: 5/138b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertByte- 00:06:35.892 [2024-07-25 11:53:12.998590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:12.998618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:12.998713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:12.998729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.892 #22 NEW cov: 12201 ft: 13806 corp: 6/157b lim: 35 exec/s: 0 rss: 72Mb L: 19/35 MS: 1 EraseBytes- 00:06:35.892 [2024-07-25 11:53:13.059957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.059988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.060082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:2c001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.060098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.060201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.060217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.060328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.060343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.060436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.060451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.892 #23 NEW cov: 12201 ft: 13956 corp: 7/192b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertByte- 00:06:35.892 [2024-07-25 11:53:13.109968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.109995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.110091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:2c001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.110107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.110207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.110223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.110324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.110338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.892 #24 NEW cov: 12201 ft: 13990 corp: 8/221b lim: 35 exec/s: 0 rss: 72Mb L: 29/35 MS: 1 EraseBytes- 00:06:35.892 [2024-07-25 11:53:13.170118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000e cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.170145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.170240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.170257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.170346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.892 [2024-07-25 11:53:13.170361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.892 [2024-07-25 11:53:13.170459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:35.892 [2024-07-25 11:53:13.170476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.892 #25 NEW cov: 12201 ft: 14008 corp: 9/255b lim: 35 exec/s: 0 rss: 72Mb L: 34/35 MS: 1 ChangeBit- 00:06:36.151 [2024-07-25 11:53:13.220476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:0a001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.220504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.220599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.220615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.220715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002c19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.220730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.220845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.220862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.151 #26 NEW cov: 12201 ft: 14051 corp: 10/285b lim: 35 exec/s: 0 rss: 72Mb L: 30/35 MS: 1 CrossOver- 00:06:36.151 [2024-07-25 11:53:13.280936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:0a001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.280963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.281057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190011 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.281073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.281169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002c19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.281184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.281286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.281302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.151 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:36.151 #27 NEW cov: 12224 ft: 14171 corp: 11/315b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 1 
ChangeBit- 00:06:36.151 [2024-07-25 11:53:13.351095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2c19000a cdw11:0a001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.351123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.351220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.351237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.351337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002c19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.351353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.351466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.351483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.151 #28 NEW cov: 12224 ft: 14206 corp: 12/345b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 1 ChangeByte- 00:06:36.151 [2024-07-25 11:53:13.401343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:0a001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.401373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.401476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190011 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.401492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.401593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002c19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.401608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.151 [2024-07-25 11:53:13.401709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19000a19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.151 [2024-07-25 11:53:13.401725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.151 #29 NEW cov: 12224 ft: 14274 corp: 13/375b lim: 35 exec/s: 29 rss: 73Mb L: 30/35 MS: 1 CrossOver- 00:06:36.409 [2024-07-25 11:53:13.461395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.409 [2024-07-25 11:53:13.461423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.409 
[2024-07-25 11:53:13.461526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.409 [2024-07-25 11:53:13.461543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.409 [2024-07-25 11:53:13.461633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.409 [2024-07-25 11:53:13.461650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.409 #30 NEW cov: 12224 ft: 14487 corp: 14/400b lim: 35 exec/s: 30 rss: 73Mb L: 25/35 MS: 1 CrossOver- 00:06:36.409 [2024-07-25 11:53:13.531251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.409 [2024-07-25 11:53:13.531279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.531377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.531394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.410 #32 NEW cov: 12224 ft: 14504 corp: 15/414b lim: 35 exec/s: 32 rss: 73Mb L: 14/35 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:36.410 [2024-07-25 11:53:13.582550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.582578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.582680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.582697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.582797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.582815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.582919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:60190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.582935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.583031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.583046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.410 #33 NEW 
cov: 12224 ft: 14520 corp: 16/449b lim: 35 exec/s: 33 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:06:36.410 [2024-07-25 11:53:13.632563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000e cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.632593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.632704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.632740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.632836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002a19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.632852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.632958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.632975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.410 #34 NEW cov: 12224 ft: 14531 corp: 17/483b lim: 35 exec/s: 34 rss: 73Mb L: 34/35 MS: 1 ChangeByte- 00:06:36.410 [2024-07-25 11:53:13.703137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.703168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.703290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:2c001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.703308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.703409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:ee001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.703426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.703530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.703550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.410 [2024-07-25 11:53:13.703651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.410 [2024-07-25 11:53:13.703668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.668 #35 NEW cov: 12224 ft: 14555 corp: 
18/518b lim: 35 exec/s: 35 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:36.668 [2024-07-25 11:53:13.752920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.752949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.753065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.753084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.753180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.753196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.753299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.753317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.668 #36 NEW cov: 12224 ft: 14573 corp: 19/552b lim: 35 exec/s: 36 rss: 73Mb L: 34/35 MS: 1 ChangeByte- 00:06:36.668 [2024-07-25 11:53:13.803111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000e cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.803140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.803244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.803262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.803358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002a19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.803375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.803482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.803499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.668 #37 NEW cov: 12224 ft: 14591 corp: 20/586b lim: 35 exec/s: 37 rss: 73Mb L: 34/35 MS: 1 ShuffleBytes- 00:06:36.668 [2024-07-25 11:53:13.873366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:0a001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.873395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.873504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19e70011 cdw11:e700e6e6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.873522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.873620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002c19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.873636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.873742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.873759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.668 #38 NEW cov: 12224 ft: 14616 corp: 21/616b lim: 35 exec/s: 38 rss: 73Mb L: 30/35 MS: 1 ChangeBinInt- 00:06:36.668 [2024-07-25 11:53:13.922912] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:36.668 [2024-07-25 11:53:13.923907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.923939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.924039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.924057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.924153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00190000 cdw11:1900192c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.924172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.924272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.924288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.668 [2024-07-25 11:53:13.924383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.668 [2024-07-25 11:53:13.924399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.668 #39 NEW cov: 12233 ft: 14646 corp: 22/651b lim: 35 exec/s: 39 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:36.927 [2024-07-25 11:53:13.973728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000e cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:36.927 [2024-07-25 11:53:13.973779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.927 [2024-07-25 11:53:13.973887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:13.973906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:13.974007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:13.974027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:13.974128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:13.974146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.928 #40 NEW cov: 12233 ft: 14674 corp: 23/681b lim: 35 exec/s: 40 rss: 73Mb L: 30/35 MS: 1 EraseBytes- 00:06:36.928 [2024-07-25 11:53:14.023871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.023902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.024010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:09190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.024027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.024123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.024140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.024239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.024257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.928 #41 NEW cov: 12233 ft: 14699 corp: 24/715b lim: 35 exec/s: 41 rss: 73Mb L: 34/35 MS: 1 ChangeBit- 00:06:36.928 [2024-07-25 11:53:14.073428] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:36.928 [2024-07-25 11:53:14.074448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.074479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.074582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.074600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.074697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00190000 cdw11:1900192c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.074717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.074824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.074840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.074933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.074950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.928 #42 NEW cov: 12233 ft: 14711 corp: 25/750b lim: 35 exec/s: 42 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:36.928 [2024-07-25 11:53:14.134374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.134403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.134499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:09190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.134515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.134614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.134631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.134733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.134752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.928 #43 NEW cov: 12233 ft: 14751 corp: 26/784b lim: 35 exec/s: 43 rss: 73Mb L: 34/35 MS: 1 CrossOver- 00:06:36.928 [2024-07-25 11:53:14.194574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.194602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.194701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:5 nsid:0 cdw10:09190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.194722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.194849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.194866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.928 [2024-07-25 11:53:14.194961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.928 [2024-07-25 11:53:14.194977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.928 #44 NEW cov: 12233 ft: 14752 corp: 27/818b lim: 35 exec/s: 44 rss: 73Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:37.187 [2024-07-25 11:53:14.254833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:0a001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.187 [2024-07-25 11:53:14.254861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.187 [2024-07-25 11:53:14.254967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190011 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.254984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.255087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19002c19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.255103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.255210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:2c001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.255230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.188 #45 NEW cov: 12233 ft: 14788 corp: 28/849b lim: 35 exec/s: 45 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:06:37.188 [2024-07-25 11:53:14.305089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000e cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.305117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.305219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:196f0019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.305238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.305337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.305353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.305445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.305461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.188 #46 NEW cov: 12233 ft: 14798 corp: 29/879b lim: 35 exec/s: 46 rss: 74Mb L: 30/35 MS: 1 ChangeByte- 00:06:37.188 [2024-07-25 11:53:14.364917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000e cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.364945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.365039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.365059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.365150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.365166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.188 #47 NEW cov: 12233 ft: 14801 corp: 30/902b lim: 35 exec/s: 47 rss: 74Mb L: 23/35 MS: 1 EraseBytes- 00:06:37.188 [2024-07-25 11:53:14.424840] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:37.188 [2024-07-25 11:53:14.425790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1919000a cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.425820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.425919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:19190019 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.425937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.426041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00190000 cdw11:1900192c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.426060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.188 [2024-07-25 11:53:14.426156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.426174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.188 
[2024-07-25 11:53:14.426269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:19190019 cdw11:19001919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.188 [2024-07-25 11:53:14.426284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:37.188 #48 NEW cov: 12233 ft: 14828 corp: 31/937b lim: 35 exec/s: 24 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:37.188 #48 DONE cov: 12233 ft: 14828 corp: 31/937b lim: 35 exec/s: 24 rss: 74Mb 00:06:37.188 Done 48 runs in 2 second(s) 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:37.448 11:53:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:37.448 [2024-07-25 11:53:14.622422] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
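The run.sh trace above is the per-type setup: fuzzer type 3 gets its own TCP listener on port 4403, a cloned JSON target config, LSAN leak suppressions, and a dedicated corpus directory before llvm_nvme_fuzz is launched. Below is a condensed sketch of an equivalent standalone invocation; paths are shortened, $SPDK_ROOT is illustrative, and the flag meanings are inferred from the script's own variable names (fuzzer_type, timen, core, corpus_dir), so treat it as a sketch rather than authoritative usage.

  # Launch one admin-command fuzzer instance by hand (sketch).
  # -m core mask, -s hugepage memory (MB), -F target transport ID,
  # -c per-run NVMe-oF config, -t time budget, -D corpus dir, -Z fuzzer type.
  $SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' \
      -c /tmp/fuzz_json_3.conf -t 1 -D "$SPDK_ROOT/../corpus/llvm_nvmf_3" -Z 3
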
00:06:37.448 [2024-07-25 11:53:14.622509] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903168 ] 00:06:37.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.708 [2024-07-25 11:53:14.915020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.708 [2024-07-25 11:53:15.009692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.967 [2024-07-25 11:53:15.069162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.967 [2024-07-25 11:53:15.085470] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:37.967 INFO: Running with entropic power schedule (0xFF, 100). 00:06:37.967 INFO: Seed: 2421073317 00:06:37.967 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:37.967 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:37.967 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:37.967 INFO: A corpus is not provided, starting from an empty corpus 00:06:37.967 #2 INITED exec/s: 0 rss: 64Mb 00:06:37.967 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:37.967 This may also happen if the target rejected all inputs we tried so far 00:06:38.226 NEW_FUNC[1/689]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:38.226 NEW_FUNC[2/689]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:38.226 #4 NEW cov: 11888 ft: 11888 corp: 2/15b lim: 20 exec/s: 0 rss: 72Mb L: 14/14 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:38.226 #8 NEW cov: 12024 ft: 12761 corp: 3/20b lim: 20 exec/s: 0 rss: 72Mb L: 5/14 MS: 4 ChangeByte-CopyPart-ChangeBit-CrossOver- 00:06:38.484 #13 NEW cov: 12030 ft: 13037 corp: 4/26b lim: 20 exec/s: 0 rss: 72Mb L: 6/14 MS: 5 ShuffleBytes-ChangeByte-ChangeByte-ShuffleBytes-CrossOver- 00:06:38.484 #14 NEW cov: 12116 ft: 13442 corp: 5/36b lim: 20 exec/s: 0 rss: 72Mb L: 10/14 MS: 1 EraseBytes- 00:06:38.484 #15 NEW cov: 12133 ft: 13701 corp: 6/52b lim: 20 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:06:38.484 #16 NEW cov: 12133 ft: 13824 corp: 7/58b lim: 20 exec/s: 0 rss: 72Mb L: 6/16 MS: 1 ChangeBit- 00:06:38.743 #17 NEW cov: 12133 ft: 13893 corp: 8/66b lim: 20 exec/s: 0 rss: 72Mb L: 8/16 MS: 1 EraseBytes- 00:06:38.743 #18 NEW cov: 12133 ft: 13987 corp: 9/71b lim: 20 exec/s: 0 rss: 72Mb L: 5/16 MS: 1 CMP- DE: "\377\001"- 00:06:38.743 #24 NEW cov: 12133 ft: 14009 corp: 10/83b lim: 20 exec/s: 0 rss: 72Mb L: 12/16 MS: 1 PersAutoDict- DE: "\377\001"- 00:06:38.743 #25 NEW cov: 12133 ft: 14054 corp: 11/88b lim: 20 exec/s: 0 rss: 72Mb L: 5/16 MS: 1 ChangeBit- 00:06:38.743 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:38.743 #26 NEW cov: 12156 ft: 14104 corp: 12/95b lim: 20 exec/s: 0 rss: 73Mb L: 7/16 MS: 1 PersAutoDict- DE: "\377\001"- 00:06:39.000 #27 NEW cov: 12156 ft: 14124 corp: 13/104b lim: 20 exec/s: 0 rss: 73Mb L: 9/16 MS: 1 InsertRepeatedBytes- 00:06:39.000 #28 NEW cov: 12156 ft: 14156 corp: 14/109b lim: 20 exec/s: 28 rss: 73Mb L: 5/16 MS: 1 ShuffleBytes- 
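Each "#N NEW" record above is libFuzzer reporting a corpus addition: cov is the number of covered code edges, ft the number of features (roughly, unique edge counters), corp the corpus size in entries/bytes, lim the current input-length cap, L the new input's length alongside the longest so far, and MS the mutation sequence that produced it (e.g. "1 EraseBytes-"). The coverage trajectory can be scraped from a saved run log after the fact; the file name below is illustrative, and the awk field numbers shift by one if the CI timestamp prefix is still attached.

  # Print event id, edge coverage, and corpus size for every NEW input.
  grep -E '#[0-9]+ +NEW ' nvmf_3.log | awk '{print $1, $4, $8}'
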
00:06:39.001 #29 NEW cov: 12156 ft: 14171 corp: 15/124b lim: 20 exec/s: 29 rss: 73Mb L: 15/16 MS: 1 InsertByte- 00:06:39.001 #30 NEW cov: 12156 ft: 14193 corp: 16/135b lim: 20 exec/s: 30 rss: 73Mb L: 11/16 MS: 1 InsertByte- 00:06:39.001 #31 NEW cov: 12156 ft: 14219 corp: 17/141b lim: 20 exec/s: 31 rss: 73Mb L: 6/16 MS: 1 ChangeBinInt- 00:06:39.259 #32 NEW cov: 12156 ft: 14226 corp: 18/159b lim: 20 exec/s: 32 rss: 73Mb L: 18/18 MS: 1 CrossOver- 00:06:39.259 #33 NEW cov: 12156 ft: 14233 corp: 19/164b lim: 20 exec/s: 33 rss: 73Mb L: 5/18 MS: 1 CopyPart- 00:06:39.259 #34 NEW cov: 12156 ft: 14263 corp: 20/171b lim: 20 exec/s: 34 rss: 73Mb L: 7/18 MS: 1 ChangeByte- 00:06:39.259 #35 NEW cov: 12156 ft: 14279 corp: 21/178b lim: 20 exec/s: 35 rss: 73Mb L: 7/18 MS: 1 ShuffleBytes- 00:06:39.518 #36 NEW cov: 12156 ft: 14335 corp: 22/195b lim: 20 exec/s: 36 rss: 73Mb L: 17/18 MS: 1 InsertRepeatedBytes- 00:06:39.518 #37 NEW cov: 12156 ft: 14357 corp: 23/201b lim: 20 exec/s: 37 rss: 73Mb L: 6/18 MS: 1 ChangeByte- 00:06:39.518 #38 NEW cov: 12156 ft: 14394 corp: 24/218b lim: 20 exec/s: 38 rss: 73Mb L: 17/18 MS: 1 InsertRepeatedBytes- 00:06:39.518 #39 NEW cov: 12156 ft: 14416 corp: 25/224b lim: 20 exec/s: 39 rss: 73Mb L: 6/18 MS: 1 InsertByte- 00:06:39.518 #40 NEW cov: 12156 ft: 14419 corp: 26/231b lim: 20 exec/s: 40 rss: 73Mb L: 7/18 MS: 1 PersAutoDict- DE: "\377\001"- 00:06:39.778 #46 NEW cov: 12156 ft: 14461 corp: 27/239b lim: 20 exec/s: 46 rss: 73Mb L: 8/18 MS: 1 EraseBytes- 00:06:39.778 #47 NEW cov: 12156 ft: 14467 corp: 28/245b lim: 20 exec/s: 47 rss: 73Mb L: 6/18 MS: 1 InsertByte- 00:06:39.778 #48 NEW cov: 12156 ft: 14478 corp: 29/249b lim: 20 exec/s: 48 rss: 73Mb L: 4/18 MS: 1 EraseBytes- 00:06:39.778 #49 NEW cov: 12156 ft: 14554 corp: 30/266b lim: 20 exec/s: 49 rss: 73Mb L: 17/18 MS: 1 CMP- DE: "\3470\236*\306\015\032\000"- 00:06:40.037 #50 NEW cov: 12156 ft: 14565 corp: 31/283b lim: 20 exec/s: 50 rss: 74Mb L: 17/18 MS: 1 ChangeByte- 00:06:40.037 #51 NEW cov: 12156 ft: 14571 corp: 32/287b lim: 20 exec/s: 25 rss: 74Mb L: 4/18 MS: 1 EraseBytes- 00:06:40.037 #51 DONE cov: 12156 ft: 14571 corp: 32/287b lim: 20 exec/s: 25 rss: 74Mb 00:06:40.037 ###### Recommended dictionary. ###### 00:06:40.037 "\377\001" # Uses: 3 00:06:40.037 "\3470\236*\306\015\032\000" # Uses: 0 00:06:40.037 ###### End of recommended dictionary. 
###### 00:06:40.037 Done 51 runs in 2 second(s) 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.037 11:53:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:40.037 [2024-07-25 11:53:17.326431] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
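The "Recommended dictionary" block above is libFuzzer suggesting byte sequences that repeatedly unlocked new coverage, with a use count per entry; the PersAutoDict and CMP mutations in the MS fields reference the same DE: entries. They can be persisted for later runs as a dictionary file. The sketch below rewrites the log's C-style octal escapes into the \xNN form dictionary files accept ("\377\001" becomes \xff\x01), and the -dict= reuse at the end is hypothetical, since it assumes the harness forwards extra flags through to libFuzzer.

  # Save the suggested tokens as a libFuzzer dictionary file (sketch).
  {
    printf 'kw1="\\xff\\x01"\n'
    printf 'kw2="\\xe7\\x30\\x9e\\x2a\\xc6\\x0d\\x1a\\x00"\n'
  } > nvmf_3.dict
  # Hypothetical reuse, only if the wrapper forwards libFuzzer flags:
  # llvm_nvme_fuzz ... -dict=nvmf_3.dict
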
00:06:40.037 [2024-07-25 11:53:17.326525] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903543 ] 00:06:40.296 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.556 [2024-07-25 11:53:17.649445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.556 [2024-07-25 11:53:17.744357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.556 [2024-07-25 11:53:17.803698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.556 [2024-07-25 11:53:17.820008] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:40.556 INFO: Running with entropic power schedule (0xFF, 100). 00:06:40.556 INFO: Seed: 862100339 00:06:40.814 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:40.815 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:40.815 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:40.815 INFO: A corpus is not provided, starting from an empty corpus 00:06:40.815 #2 INITED exec/s: 0 rss: 64Mb 00:06:40.815 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:40.815 This may also happen if the target rejected all inputs we tried so far 00:06:40.815 [2024-07-25 11:53:17.898267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.815 [2024-07-25 11:53:17.898315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.815 [2024-07-25 11:53:17.898430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.815 [2024-07-25 11:53:17.898454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.815 [2024-07-25 11:53:17.898576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.815 [2024-07-25 11:53:17.898596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.074 NEW_FUNC[1/701]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:41.074 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.074 #12 NEW cov: 12001 ft: 12002 corp: 2/22b lim: 35 exec/s: 0 rss: 72Mb L: 21/21 MS: 5 ChangeByte-InsertByte-CrossOver-ChangeBit-InsertRepeatedBytes- 00:06:41.074 [2024-07-25 11:53:18.268748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:59720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.074 [2024-07-25 11:53:18.268791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.074 [2024-07-25 11:53:18.268890] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.074 [2024-07-25 11:53:18.268907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.074 [2024-07-25 11:53:18.269011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.074 [2024-07-25 11:53:18.269026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.074 #13 NEW cov: 12131 ft: 12510 corp: 3/43b lim: 35 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 ChangeByte- 00:06:41.074 [2024-07-25 11:53:18.338843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.074 [2024-07-25 11:53:18.338871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.074 [2024-07-25 11:53:18.338963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.074 [2024-07-25 11:53:18.338979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.074 [2024-07-25 11:53:18.339078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.074 [2024-07-25 11:53:18.339093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.074 #19 NEW cov: 12137 ft: 12813 corp: 4/67b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CopyPart- 00:06:41.364 [2024-07-25 11:53:18.389111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.389142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.389238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.389254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.389349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.389367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.364 #20 NEW cov: 12222 ft: 12998 corp: 5/91b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeByte- 00:06:41.364 [2024-07-25 11:53:18.449329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.449356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.449445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.449460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.449551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.449566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.364 #21 NEW cov: 12222 ft: 13248 corp: 6/115b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CopyPart- 00:06:41.364 [2024-07-25 11:53:18.509455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.509481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.509572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72f27272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.509587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.509670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.509686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.364 #22 NEW cov: 12222 ft: 13361 corp: 7/139b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeBit- 00:06:41.364 [2024-07-25 11:53:18.559717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.559747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.559844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.559859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.559958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:56727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.559976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.364 #23 NEW cov: 12222 ft: 13443 corp: 8/163b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeByte- 00:06:41.364 [2024-07-25 11:53:18.609851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
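The NOTICE records running through this section come in pairs, one pair per fuzz iteration against the admin queue: the nvme_qpair.c:225 record is the mutated command as submitted (here CREATE IO CQ, opcode 05h, with fuzzed cdw10/cdw11 dwords), and the nvme_qpair.c:477 record is the target's completion, whose (SCT/SC) pair decodes as generic status codes, e.g. (00/01) invalid opcode and (00/02) invalid field, matching the text printed alongside. A quick triage over a saved log (file name illustrative) tallies which statuses the target actually returned:

  # Count the distinct completion statuses seen during a run.
  grep -oE '\*NOTICE\*: [A-Z ]+ \([0-9a-f]{2}/[0-9a-f]{2}\)' nvmf_4.log |
      sort | uniq -c | sort -rn
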
00:06:41.364 [2024-07-25 11:53:18.609878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.609968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.609983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.610075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.610091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.364 #24 NEW cov: 12222 ft: 13501 corp: 9/184b lim: 35 exec/s: 0 rss: 72Mb L: 21/24 MS: 1 CopyPart- 00:06:41.364 [2024-07-25 11:53:18.660052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.660079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.660177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00007272 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.660193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.364 [2024-07-25 11:53:18.660288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:15720000 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.364 [2024-07-25 11:53:18.660304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.623 #30 NEW cov: 12222 ft: 13582 corp: 10/205b lim: 35 exec/s: 0 rss: 72Mb L: 21/24 MS: 1 ChangeBinInt- 00:06:41.623 [2024-07-25 11:53:18.710718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.623 [2024-07-25 11:53:18.710752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.623 [2024-07-25 11:53:18.710857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.623 [2024-07-25 11:53:18.710874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.623 [2024-07-25 11:53:18.710971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.623 [2024-07-25 11:53:18.710987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.623 [2024-07-25 11:53:18.711082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:41.623 [2024-07-25 11:53:18.711099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.623 #31 NEW cov: 12222 ft: 13990 corp: 11/237b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:06:41.623 [2024-07-25 11:53:18.760374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.623 [2024-07-25 11:53:18.760406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.624 [2024-07-25 11:53:18.760505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72f27272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.760521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.624 [2024-07-25 11:53:18.760618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.760633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.624 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:41.624 #32 NEW cov: 12239 ft: 14032 corp: 12/261b lim: 35 exec/s: 0 rss: 72Mb L: 24/32 MS: 1 CrossOver- 00:06:41.624 [2024-07-25 11:53:18.830581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:12127212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.830608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.624 [2024-07-25 11:53:18.830700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.830715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.624 [2024-07-25 11:53:18.830815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.830831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.624 #37 NEW cov: 12239 ft: 14069 corp: 13/287b lim: 35 exec/s: 0 rss: 72Mb L: 26/32 MS: 5 ShuffleBytes-ShuffleBytes-CopyPart-CrossOver-InsertRepeatedBytes- 00:06:41.624 [2024-07-25 11:53:18.880776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:59720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.880803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.624 [2024-07-25 11:53:18.880893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.880909] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.624 [2024-07-25 11:53:18.880994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.624 [2024-07-25 11:53:18.881010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.624 #38 NEW cov: 12239 ft: 14108 corp: 14/308b lim: 35 exec/s: 38 rss: 73Mb L: 21/32 MS: 1 ChangeBit- 00:06:41.883 [2024-07-25 11:53:18.941021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.883 [2024-07-25 11:53:18.941048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.883 [2024-07-25 11:53:18.941146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00007272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.883 [2024-07-25 11:53:18.941162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.883 [2024-07-25 11:53:18.941252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.883 [2024-07-25 11:53:18.941268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.883 #39 NEW cov: 12239 ft: 14130 corp: 15/329b lim: 35 exec/s: 39 rss: 73Mb L: 21/32 MS: 1 CrossOver- 00:06:41.883 [2024-07-25 11:53:19.001557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.883 [2024-07-25 11:53:19.001585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.883 [2024-07-25 11:53:19.001680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.001696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.001791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.001805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.001897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:72724a72 cdw11:72460002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.001912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.884 #40 NEW cov: 12239 ft: 14152 corp: 16/358b lim: 35 exec/s: 40 rss: 73Mb L: 29/32 MS: 1 CopyPart- 00:06:41.884 [2024-07-25 11:53:19.071454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 
cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.071480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.071576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.071592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.071682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.071698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.884 #41 NEW cov: 12239 ft: 14184 corp: 17/382b lim: 35 exec/s: 41 rss: 73Mb L: 24/32 MS: 1 ShuffleBytes- 00:06:41.884 [2024-07-25 11:53:19.121662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.121688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.121786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.121801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.121897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:46250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.121914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.884 #42 NEW cov: 12239 ft: 14244 corp: 18/403b lim: 35 exec/s: 42 rss: 73Mb L: 21/32 MS: 1 ChangeByte- 00:06:41.884 [2024-07-25 11:53:19.182254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.182281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.182370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.182386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.182478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff72ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.182493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.884 [2024-07-25 11:53:19.182582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 
cdw10:ffffffff cdw11:46560002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.884 [2024-07-25 11:53:19.182597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.144 #43 NEW cov: 12239 ft: 14251 corp: 19/437b lim: 35 exec/s: 43 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:42.144 [2024-07-25 11:53:19.252466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.252494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.252588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.252604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.252694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.252712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.252801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:72721272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.252819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.144 #44 NEW cov: 12239 ft: 14266 corp: 20/467b lim: 35 exec/s: 44 rss: 73Mb L: 30/34 MS: 1 CrossOver- 00:06:42.144 [2024-07-25 11:53:19.302311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.302340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.302429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.302447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.302549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.302566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.144 #45 NEW cov: 12239 ft: 14290 corp: 21/488b lim: 35 exec/s: 45 rss: 73Mb L: 21/34 MS: 1 ChangeASCIIInt- 00:06:42.144 [2024-07-25 11:53:19.352210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:12127212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.352240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.144 
[2024-07-25 11:53:19.352341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.352359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.144 #46 NEW cov: 12239 ft: 14537 corp: 22/508b lim: 35 exec/s: 46 rss: 73Mb L: 20/34 MS: 1 EraseBytes- 00:06:42.144 [2024-07-25 11:53:19.423167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.423197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.423299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.423316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.423414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72f27272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.423430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.144 [2024-07-25 11:53:19.423521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.144 [2024-07-25 11:53:19.423537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.403 #47 NEW cov: 12239 ft: 14564 corp: 23/539b lim: 35 exec/s: 47 rss: 73Mb L: 31/34 MS: 1 InsertRepeatedBytes- 00:06:42.403 [2024-07-25 11:53:19.493028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.493059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.493156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72320002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.493173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.493269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:46250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.493286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.403 #48 NEW cov: 12239 ft: 14575 corp: 24/560b lim: 35 exec/s: 48 rss: 73Mb L: 21/34 MS: 1 ChangeBit- 00:06:42.403 [2024-07-25 11:53:19.563186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.563215] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.563320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00007272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.563340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.563439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a72724b cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.563456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.403 #49 NEW cov: 12239 ft: 14601 corp: 25/581b lim: 35 exec/s: 49 rss: 73Mb L: 21/34 MS: 1 ChangeBinInt- 00:06:42.403 [2024-07-25 11:53:19.623776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.623802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.623902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.623918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.624010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.624027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.624119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3272464a cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.624135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.403 #50 NEW cov: 12239 ft: 14615 corp: 26/611b lim: 35 exec/s: 50 rss: 73Mb L: 30/34 MS: 1 CopyPart- 00:06:42.403 [2024-07-25 11:53:19.674054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.674080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.674179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:724a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.674195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.674283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.674297] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.403 [2024-07-25 11:53:19.674411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3272464a cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.403 [2024-07-25 11:53:19.674427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.664 #51 NEW cov: 12239 ft: 14637 corp: 27/641b lim: 35 exec/s: 51 rss: 74Mb L: 30/34 MS: 1 CrossOver- 00:06:42.664 [2024-07-25 11:53:19.744295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.744324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.744427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72f27272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.744447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.744532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:6e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.744549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.744645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:72720072 cdw11:46460000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.744663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.664 #52 NEW cov: 12246 ft: 14658 corp: 28/669b lim: 35 exec/s: 52 rss: 74Mb L: 28/34 MS: 1 CMP- DE: "n\000\000\000"- 00:06:42.664 [2024-07-25 11:53:19.794114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:72727272 cdw11:72720000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.794140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.794229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.794244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.794343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a727246 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.794358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.664 #53 NEW cov: 12246 ft: 14665 corp: 29/693b lim: 35 exec/s: 53 rss: 74Mb L: 24/34 MS: 1 ChangeBit- 00:06:42.664 [2024-07-25 11:53:19.855174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.855199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.855297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.855313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.855400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff7272 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.855416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.855506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff460002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.855522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.664 [2024-07-25 11:53:19.855620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:464a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.664 [2024-07-25 11:53:19.855635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.664 #54 NEW cov: 12246 ft: 14725 corp: 30/728b lim: 35 exec/s: 27 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:42.664 #54 DONE cov: 12246 ft: 14725 corp: 30/728b lim: 35 exec/s: 27 rss: 74Mb 00:06:42.664 ###### Recommended dictionary. ###### 00:06:42.664 "n\000\000\000" # Uses: 0 00:06:42.664 ###### End of recommended dictionary. 
###### 00:06:42.664 Done 54 runs in 2 second(s) 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.924 11:53:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:42.924 [2024-07-25 11:53:20.067368] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
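The same setup trace now repeats for fuzzer type 5 and shows how each type gets a unique listener: the TCP service ID is "44" plus the zero-padded type number, and sed rewrites the default trsvcid 4420 in the shared JSON template into the per-run copy that -c then consumes. A condensed sketch of that derivation, with the template path shortened relative to the spdk tree:

  # Derive the per-type port and per-run config the way run.sh does (sketch).
  fuzzer_type=5
  port="44$(printf %02d "$fuzzer_type")"   # -> 4405
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"
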
00:06:42.924 [2024-07-25 11:53:20.067458] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903918 ] 00:06:42.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.184 [2024-07-25 11:53:20.281904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.184 [2024-07-25 11:53:20.354207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.184 [2024-07-25 11:53:20.414151] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.184 [2024-07-25 11:53:20.430449] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:43.184 INFO: Running with entropic power schedule (0xFF, 100). 00:06:43.184 INFO: Seed: 3471098804 00:06:43.184 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:43.184 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:43.184 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:43.184 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.184 #2 INITED exec/s: 0 rss: 65Mb 00:06:43.184 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:43.184 This may also happen if the target rejected all inputs we tried so far 00:06:43.443 [2024-07-25 11:53:20.495788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.443 [2024-07-25 11:53:20.495818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.703 NEW_FUNC[1/701]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:43.703 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:43.703 #15 NEW cov: 12006 ft: 11978 corp: 2/12b lim: 45 exec/s: 0 rss: 72Mb L: 11/11 MS: 3 CrossOver-ChangeBit-InsertRepeatedBytes- 00:06:43.703 [2024-07-25 11:53:20.847023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.703 [2024-07-25 11:53:20.847083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.703 [2024-07-25 11:53:20.847175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.703 [2024-07-25 11:53:20.847202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.703 #25 NEW cov: 12142 ft: 13325 corp: 3/35b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 5 EraseBytes-ChangeByte-CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:43.703 [2024-07-25 11:53:20.906888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff183dc5 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.703 [2024-07-25 11:53:20.906915] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.703 [2024-07-25 11:53:20.906971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.703 [2024-07-25 11:53:20.906986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.703 #26 NEW cov: 12148 ft: 13732 corp: 4/59b lim: 45 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 InsertByte- 00:06:43.703 [2024-07-25 11:53:20.957348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff183dc5 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.703 [2024-07-25 11:53:20.957377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.703 [2024-07-25 11:53:20.957432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.703 [2024-07-25 11:53:20.957446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.703 [2024-07-25 11:53:20.957498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.704 [2024-07-25 11:53:20.957512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.704 [2024-07-25 11:53:20.957565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.704 [2024-07-25 11:53:20.957578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.704 #27 NEW cov: 12233 ft: 14259 corp: 5/102b lim: 45 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 CopyPart- 00:06:43.964 [2024-07-25 11:53:21.017539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.017565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.017639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.017656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.017710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1818ff18 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.017724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.017780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.017794] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.964 #28 NEW cov: 12233 ft: 14403 corp: 6/142b lim: 45 exec/s: 0 rss: 72Mb L: 40/43 MS: 1 CopyPart- 00:06:43.964 [2024-07-25 11:53:21.057336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.057361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.057414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.057428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.964 #29 NEW cov: 12233 ft: 14502 corp: 7/165b lim: 45 exec/s: 0 rss: 72Mb L: 23/43 MS: 1 ShuffleBytes- 00:06:43.964 [2024-07-25 11:53:21.097492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.097516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.097572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.097586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.964 #30 NEW cov: 12233 ft: 14591 corp: 8/188b lim: 45 exec/s: 0 rss: 72Mb L: 23/43 MS: 1 ChangeBit- 00:06:43.964 [2024-07-25 11:53:21.147705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.147731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.147792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3dff1818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.147807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.147862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:18181818 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.147876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.964 #31 NEW cov: 12233 ft: 14827 corp: 9/215b lim: 45 exec/s: 0 rss: 73Mb L: 27/43 MS: 1 CopyPart- 00:06:43.964 [2024-07-25 11:53:21.188002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.188028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.188083] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3dff1818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.188100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.188153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:18181818 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.188166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.188218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:183d1818 cdw11:ff180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.188231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.964 #32 NEW cov: 12233 ft: 14878 corp: 10/256b lim: 45 exec/s: 0 rss: 73Mb L: 41/43 MS: 1 CrossOver- 00:06:43.964 [2024-07-25 11:53:21.238159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.238184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.238237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.238251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.238302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1818ff18 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.238316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.964 [2024-07-25 11:53:21.238368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.964 [2024-07-25 11:53:21.238381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.224 #33 NEW cov: 12233 ft: 14919 corp: 11/296b lim: 45 exec/s: 0 rss: 73Mb L: 40/43 MS: 1 ShuffleBytes- 00:06:44.224 [2024-07-25 11:53:21.287981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.288005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.288057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.288071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.224 #34 NEW cov: 12233 ft: 14934 
corp: 12/319b lim: 45 exec/s: 0 rss: 73Mb L: 23/43 MS: 1 CopyPart- 00:06:44.224 [2024-07-25 11:53:21.328231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff183d3d cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.328256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.328309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.328323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.328376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:181818ff cdw11:ff180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.328393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.224 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:44.224 #35 NEW cov: 12256 ft: 14971 corp: 13/350b lim: 45 exec/s: 0 rss: 73Mb L: 31/43 MS: 1 CrossOver- 00:06:44.224 [2024-07-25 11:53:21.388280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.388306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.388361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.388375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.224 #36 NEW cov: 12256 ft: 14996 corp: 14/368b lim: 45 exec/s: 0 rss: 73Mb L: 18/43 MS: 1 EraseBytes- 00:06:44.224 [2024-07-25 11:53:21.428512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff183d3d cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.428536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.428591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.428604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.428657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:82821882 cdw11:82ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.428671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.224 #37 NEW cov: 12256 ft: 15000 corp: 15/403b lim: 45 exec/s: 0 rss: 73Mb L: 35/43 MS: 1 InsertRepeatedBytes- 00:06:44.224 [2024-07-25 11:53:21.478647] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff183d3d cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.478672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.478727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181918 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.478745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.478798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:181818ff cdw11:ff180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.478812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.224 #38 NEW cov: 12256 ft: 15014 corp: 16/434b lim: 45 exec/s: 38 rss: 73Mb L: 31/43 MS: 1 ChangeBit- 00:06:44.224 [2024-07-25 11:53:21.518961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.518986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.519042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.519058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.519110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:181818ff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.519123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.224 [2024-07-25 11:53:21.519175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.224 [2024-07-25 11:53:21.519188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.484 #39 NEW cov: 12256 ft: 15020 corp: 17/475b lim: 45 exec/s: 39 rss: 73Mb L: 41/43 MS: 1 InsertByte- 00:06:44.484 [2024-07-25 11:53:21.558704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff183db1 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.558729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.484 [2024-07-25 11:53:21.558789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.558803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.484 #40 NEW cov: 12256 ft: 15067 corp: 18/499b 
lim: 45 exec/s: 40 rss: 73Mb L: 24/43 MS: 1 InsertByte- 00:06:44.484 [2024-07-25 11:53:21.608859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.608885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.484 [2024-07-25 11:53:21.608957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.608971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.484 #41 NEW cov: 12256 ft: 15095 corp: 19/522b lim: 45 exec/s: 41 rss: 73Mb L: 23/43 MS: 1 CopyPart- 00:06:44.484 [2024-07-25 11:53:21.649295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.649320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.484 [2024-07-25 11:53:21.649373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.649387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.484 [2024-07-25 11:53:21.649441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:181818ff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.649455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.484 [2024-07-25 11:53:21.649511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.649525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.484 #42 NEW cov: 12256 ft: 15160 corp: 20/566b lim: 45 exec/s: 42 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:06:44.484 [2024-07-25 11:53:21.699273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.699304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.484 [2024-07-25 11:53:21.699358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.484 [2024-07-25 11:53:21.699372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.484 [2024-07-25 11:53:21.699425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.485 [2024-07-25 11:53:21.699438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.485 #43 NEW cov: 12256 ft: 15190 corp: 21/600b lim: 45 exec/s: 43 rss: 73Mb L: 34/44 MS: 1 InsertRepeatedBytes- 00:06:44.485 [2024-07-25 11:53:21.739207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1818b1ff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.485 [2024-07-25 11:53:21.739233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.485 [2024-07-25 11:53:21.739303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.485 [2024-07-25 11:53:21.739317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.485 #44 NEW cov: 12256 ft: 15204 corp: 22/623b lim: 45 exec/s: 44 rss: 73Mb L: 23/44 MS: 1 EraseBytes- 00:06:44.744 [2024-07-25 11:53:21.789372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.789399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:21.789453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18e91818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.789467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.744 #45 NEW cov: 12256 ft: 15246 corp: 23/646b lim: 45 exec/s: 45 rss: 73Mb L: 23/44 MS: 1 ChangeByte- 00:06:44.744 [2024-07-25 11:53:21.829810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.829835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:21.829906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff1818 cdw11:ecff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.829921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:21.829976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.829990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:21.830044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.830058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.744 #46 NEW cov: 12256 ft: 15267 corp: 24/690b lim: 45 exec/s: 46 rss: 73Mb L: 44/44 MS: 1 CrossOver- 00:06:44.744 [2024-07-25 11:53:21.879562] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.879587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:21.879645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.879658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.744 #47 NEW cov: 12256 ft: 15290 corp: 25/715b lim: 45 exec/s: 47 rss: 73Mb L: 25/44 MS: 1 EraseBytes- 00:06:44.744 [2024-07-25 11:53:21.919712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff8f3db1 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.919741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:21.919796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.919809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.744 #48 NEW cov: 12256 ft: 15307 corp: 26/739b lim: 45 exec/s: 48 rss: 73Mb L: 24/44 MS: 1 ChangeByte- 00:06:44.744 [2024-07-25 11:53:21.959825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:182d3dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.959851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:21.959906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18e91818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:21.959920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.744 #49 NEW cov: 12256 ft: 15382 corp: 27/762b lim: 45 exec/s: 49 rss: 73Mb L: 23/44 MS: 1 ChangeByte- 00:06:44.744 [2024-07-25 11:53:22.010333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:22.010360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:22.010416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff1818 cdw11:ecff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:22.010431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:22.010487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.744 [2024-07-25 11:53:22.010501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.744 [2024-07-25 11:53:22.010556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.745 [2024-07-25 11:53:22.010569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.004 #50 NEW cov: 12256 ft: 15411 corp: 28/806b lim: 45 exec/s: 50 rss: 73Mb L: 44/44 MS: 1 ChangeBit- 00:06:45.004 [2024-07-25 11:53:22.070422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18860004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.070448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.070505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:86868686 cdw11:86860004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.070519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.070575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:59188618 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.070589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.070642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.070656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.004 #51 NEW cov: 12256 ft: 15420 corp: 29/845b lim: 45 exec/s: 51 rss: 74Mb L: 39/44 MS: 1 InsertRepeatedBytes- 00:06:45.004 [2024-07-25 11:53:22.120582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff183d3d cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.120608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.120665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181918 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.120679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.120732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:181818ff cdw11:ff180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.120750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.120802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:59591859 cdw11:59590002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.120815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.004 #52 NEW cov: 12256 ft: 15464 corp: 30/886b lim: 45 exec/s: 52 rss: 74Mb L: 41/44 MS: 1 InsertRepeatedBytes- 00:06:45.004 [2024-07-25 11:53:22.170745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.170771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.170841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff1818 cdw11:ecff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.170857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.170912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.170926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.004 [2024-07-25 11:53:22.170981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:181f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.004 [2024-07-25 11:53:22.170995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.004 #53 NEW cov: 12256 ft: 15477 corp: 31/930b lim: 45 exec/s: 53 rss: 74Mb L: 44/44 MS: 1 ChangeBinInt- 00:06:45.004 [2024-07-25 11:53:22.210586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:1818b1ff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.210612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.005 [2024-07-25 11:53:22.210686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.210702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.005 #54 NEW cov: 12256 ft: 15484 corp: 32/953b lim: 45 exec/s: 54 rss: 74Mb L: 23/44 MS: 1 CopyPart- 00:06:45.005 [2024-07-25 11:53:22.261070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.261096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.005 [2024-07-25 11:53:22.261152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.261165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.005 [2024-07-25 11:53:22.261218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.261231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.005 [2024-07-25 11:53:22.261284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.261297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.005 #55 NEW cov: 12256 ft: 15497 corp: 33/994b lim: 45 exec/s: 55 rss: 74Mb L: 41/44 MS: 1 CrossOver- 00:06:45.005 [2024-07-25 11:53:22.301195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.301220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.005 [2024-07-25 11:53:22.301296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.301311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.005 [2024-07-25 11:53:22.301367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.301381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.005 [2024-07-25 11:53:22.301435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:181818ff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.005 [2024-07-25 11:53:22.301449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.265 #56 NEW cov: 12256 ft: 15506 corp: 34/1031b lim: 45 exec/s: 56 rss: 74Mb L: 37/44 MS: 1 InsertRepeatedBytes- 00:06:45.265 [2024-07-25 11:53:22.341287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.341316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.341372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff1818 cdw11:ecff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.341386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.341437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.341450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.341503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.341517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.265 #57 NEW cov: 12256 ft: 15512 corp: 35/1075b lim: 45 exec/s: 57 rss: 74Mb L: 44/44 MS: 1 ShuffleBytes- 00:06:45.265 [2024-07-25 11:53:22.381392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.381417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.381489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.381504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.381561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1818ff18 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.381575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.381629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.381643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.265 #58 NEW cov: 12256 ft: 15523 corp: 36/1119b lim: 45 exec/s: 58 rss: 74Mb L: 44/44 MS: 1 CrossOver- 00:06:45.265 [2024-07-25 11:53:22.421343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3dff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.421368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.421424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:18181818 cdw11:59180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.421438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.421492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:18181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.421507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.265 #59 NEW cov: 12256 ft: 15537 corp: 37/1151b lim: 45 exec/s: 59 rss: 74Mb L: 32/44 MS: 1 InsertRepeatedBytes- 00:06:45.265 [2024-07-25 11:53:22.461234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:18183dff cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.265 [2024-07-25 11:53:22.461261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.265 [2024-07-25 11:53:22.461316] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:b7181818 cdw11:18180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.266 [2024-07-25 11:53:22.461330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.266 #60 NEW cov: 12256 ft: 15551 corp: 38/1174b lim: 45 exec/s: 30 rss: 74Mb L: 23/44 MS: 1 ChangeByte-
00:06:45.266 #60 DONE cov: 12256 ft: 15551 corp: 38/1174b lim: 45 exec/s: 30 rss: 74Mb
00:06:45.266 Done 60 runs in 2 second(s)
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406'
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:45.526 11:53:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6
00:06:45.526 [2024-07-25 11:53:22.665097] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:06:45.526 [2024-07-25 11:53:22.665178] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904284 ]
00:06:45.526 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.785 [2024-07-25 11:53:22.985627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.785 [2024-07-25 11:53:23.071188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.044 [2024-07-25 11:53:23.131080] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:46.044 [2024-07-25 11:53:23.147365] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 ***
00:06:46.044 INFO: Running with entropic power schedule (0xFF, 100).
00:06:46.044 INFO: Seed: 1892117968
00:06:46.044 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
00:06:46.044 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
00:06:46.044 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:06:46.044 INFO: A corpus is not provided, starting from an empty corpus
00:06:46.044 #2 INITED exec/s: 0 rss: 65Mb
00:06:46.044 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:46.044 This may also happen if the target rejected all inputs we tried so far
00:06:46.044 [2024-07-25 11:53:23.195594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000
00:06:46.044 [2024-07-25 11:53:23.195630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.303 NEW_FUNC[1/699]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161
00:06:46.303 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:46.303 #3 NEW cov: 11947 ft: 11945 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver-
00:06:46.303 [2024-07-25 11:53:23.566512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000
00:06:46.303 [2024-07-25 11:53:23.566563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.562 #4 NEW cov: 12060 ft: 12462 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte-
00:06:46.562 [2024-07-25 11:53:23.646591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c32f cdw11:00000000
00:06:46.562 [2024-07-25 11:53:23.646630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.562 #6 NEW cov: 12066 ft: 12746 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 EraseBytes-InsertByte-
00:06:46.562 [2024-07-25 11:53:23.726772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c32f cdw11:00000000
00:06:46.562 [2024-07-25 11:53:23.726806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:46.562 #7 NEW cov:
12151 ft: 12953 corp: 5/9b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:46.562 [2024-07-25 11:53:23.807120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c39e cdw11:00000000 00:06:46.562 [2024-07-25 11:53:23.807154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.562 [2024-07-25 11:53:23.807187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009e9e cdw11:00000000 00:06:46.562 [2024-07-25 11:53:23.807205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.562 [2024-07-25 11:53:23.807236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009e2f cdw11:00000000 00:06:46.562 [2024-07-25 11:53:23.807252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.821 #8 NEW cov: 12151 ft: 13265 corp: 6/15b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:46.821 [2024-07-25 11:53:23.887258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e6c3 cdw11:00000000 00:06:46.821 [2024-07-25 11:53:23.887289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.821 [2024-07-25 11:53:23.887336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009e9e cdw11:00000000 00:06:46.821 [2024-07-25 11:53:23.887352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.821 [2024-07-25 11:53:23.887379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009e9e cdw11:00000000 00:06:46.821 [2024-07-25 11:53:23.887403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.821 #12 NEW cov: 12151 ft: 13414 corp: 7/22b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 4 EraseBytes-ChangeByte-CopyPart-CrossOver- 00:06:46.821 [2024-07-25 11:53:23.947460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c32f cdw11:00000000 00:06:46.821 [2024-07-25 11:53:23.947492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.821 [2024-07-25 11:53:23.947539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007979 cdw11:00000000 00:06:46.821 [2024-07-25 11:53:23.947555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.821 [2024-07-25 11:53:23.947583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007979 cdw11:00000000 00:06:46.821 [2024-07-25 11:53:23.947599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.821 [2024-07-25 11:53:23.947627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007979 cdw11:00000000 00:06:46.821 [2024-07-25 11:53:23.947642] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.821 #13 NEW cov: 12151 ft: 13682 corp: 8/30b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:46.821 [2024-07-25 11:53:24.007653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aaa cdw11:00000000 00:06:46.821 [2024-07-25 11:53:24.007685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.822 [2024-07-25 11:53:24.007731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.007754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.822 [2024-07-25 11:53:24.007782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.007798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.822 [2024-07-25 11:53:24.007825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.007841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.822 [2024-07-25 11:53:24.007869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.007885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.822 #14 NEW cov: 12151 ft: 13792 corp: 9/40b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:46.822 [2024-07-25 11:53:24.067663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c3c3 cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.067695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.822 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:46.822 #15 NEW cov: 12174 ft: 13860 corp: 10/42b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 CopyPart- 00:06:46.822 [2024-07-25 11:53:24.117924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e6c3 cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.117956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.822 [2024-07-25 11:53:24.118007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009e9e cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.118023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.822 [2024-07-25 11:53:24.118051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005b9e cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.118066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.822 [2024-07-25 11:53:24.118093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009e2f cdw11:00000000 00:06:46.822 [2024-07-25 11:53:24.118109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.081 #16 NEW cov: 12174 ft: 13979 corp: 11/50b lim: 10 exec/s: 16 rss: 73Mb L: 8/10 MS: 1 InsertByte- 00:06:47.081 [2024-07-25 11:53:24.198099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.198130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.198162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.198178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.198205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0f cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.198221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.081 #17 NEW cov: 12174 ft: 13998 corp: 12/56b lim: 10 exec/s: 17 rss: 73Mb L: 6/10 MS: 1 CMP- DE: "\377\377\377\017"- 00:06:47.081 [2024-07-25 11:53:24.248281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c324 cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.248313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.248344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007979 cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.248360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.248387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007979 cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.248403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.248430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007979 cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.248446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.081 #18 NEW cov: 12174 ft: 14018 corp: 13/64b lim: 10 exec/s: 18 rss: 73Mb L: 8/10 MS: 1 ChangeByte- 00:06:47.081 [2024-07-25 11:53:24.328493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c32f cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.328525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.328557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007979 cdw11:00000000 
00:06:47.081 [2024-07-25 11:53:24.328572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.328600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f979 cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.328619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.328647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007979 cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.328662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.081 #19 NEW cov: 12174 ft: 14033 corp: 14/72b lim: 10 exec/s: 19 rss: 74Mb L: 8/10 MS: 1 ChangeBit- 00:06:47.081 [2024-07-25 11:53:24.378577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aaa cdw11:00000000 00:06:47.081 [2024-07-25 11:53:24.378609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.081 [2024-07-25 11:53:24.378656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:47.082 [2024-07-25 11:53:24.378672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.082 [2024-07-25 11:53:24.378701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:47.082 [2024-07-25 11:53:24.378717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.082 [2024-07-25 11:53:24.378752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:47.082 [2024-07-25 11:53:24.378769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.082 [2024-07-25 11:53:24.378797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:47.082 [2024-07-25 11:53:24.378813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.340 #20 NEW cov: 12174 ft: 14070 corp: 15/82b lim: 10 exec/s: 20 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:47.340 [2024-07-25 11:53:24.458649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000 00:06:47.340 [2024-07-25 11:53:24.458681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 #21 NEW cov: 12174 ft: 14098 corp: 16/84b lim: 10 exec/s: 21 rss: 74Mb L: 2/10 MS: 1 CopyPart- 00:06:47.340 [2024-07-25 11:53:24.508770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003bc3 cdw11:00000000 00:06:47.340 [2024-07-25 11:53:24.508802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 #22 NEW cov: 12174 ft: 14130 corp: 17/86b lim: 10 
exec/s: 22 rss: 74Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:47.340 [2024-07-25 11:53:24.589192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aaa cdw11:00000000 00:06:47.340 [2024-07-25 11:53:24.589225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 [2024-07-25 11:53:24.589257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:47.340 [2024-07-25 11:53:24.589273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.340 [2024-07-25 11:53:24.589301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:47.340 [2024-07-25 11:53:24.589316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.340 [2024-07-25 11:53:24.589344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000aaaa cdw11:00000000 00:06:47.340 [2024-07-25 11:53:24.589363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.340 [2024-07-25 11:53:24.589391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000aa0a cdw11:00000000 00:06:47.340 [2024-07-25 11:53:24.589406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.340 #23 NEW cov: 12174 ft: 14154 corp: 18/96b lim: 10 exec/s: 23 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:47.599 [2024-07-25 11:53:24.649218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.649250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.649282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.649298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.649326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0f cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.649341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.599 #24 NEW cov: 12174 ft: 14193 corp: 19/102b lim: 10 exec/s: 24 rss: 74Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:47.599 [2024-07-25 11:53:24.729452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c314 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.729486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.729534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.729551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.729579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.729595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.599 #26 NEW cov: 12174 ft: 14203 corp: 20/108b lim: 10 exec/s: 26 rss: 74Mb L: 6/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:47.599 [2024-07-25 11:53:24.789665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001c2 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.789699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.789732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000c2c2 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.789756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.789784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c2c2 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.789801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.789829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000c2c2 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.789846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.599 #28 NEW cov: 12174 ft: 14251 corp: 21/117b lim: 10 exec/s: 28 rss: 74Mb L: 9/10 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:06:47.599 [2024-07-25 11:53:24.839732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.839774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.839809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff0f cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.839825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.599 [2024-07-25 11:53:24.839853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:47.599 [2024-07-25 11:53:24.839869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.859 #29 NEW cov: 12174 ft: 14280 corp: 22/123b lim: 10 exec/s: 29 rss: 74Mb L: 6/10 MS: 1 PersAutoDict- DE: "\377\377\377\017"- 00:06:47.859 [2024-07-25 11:53:24.919828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c3bd cdw11:00000000 00:06:47.859 [2024-07-25 11:53:24.919861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.859 #30 NEW cov: 12174 ft: 14284 corp: 23/125b lim: 10 exec/s: 30 rss: 74Mb L: 2/10 MS: 1 
ChangeBinInt- 00:06:47.859 [2024-07-25 11:53:24.970064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000 00:06:47.859 [2024-07-25 11:53:24.970099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:24.970130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000030ff cdw11:00000000 00:06:47.859 [2024-07-25 11:53:24.970146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:24.970173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0f cdw11:00000000 00:06:47.859 [2024-07-25 11:53:24.970189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.859 #31 NEW cov: 12174 ft: 14301 corp: 24/131b lim: 10 exec/s: 31 rss: 74Mb L: 6/10 MS: 1 ChangeByte- 00:06:47.859 [2024-07-25 11:53:25.020162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e6c3 cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.020195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:25.020227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009e9e cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.020243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:25.020271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009e00 cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.020286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.859 #32 NEW cov: 12174 ft: 14377 corp: 25/138b lim: 10 exec/s: 32 rss: 74Mb L: 7/10 MS: 1 ChangeByte- 00:06:47.859 [2024-07-25 11:53:25.080374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c379 cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.080407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:25.080438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002479 cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.080454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:25.080487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007979 cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.080502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:25.080530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007979 cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.080545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:06:47.859 #33 NEW cov: 12174 ft: 14447 corp: 26/146b lim: 10 exec/s: 33 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:06:47.859 [2024-07-25 11:53:25.160536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aaa cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.160570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.859 [2024-07-25 11:53:25.160602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000001aa cdw11:00000000 00:06:47.859 [2024-07-25 11:53:25.160618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.119 #34 NEW cov: 12174 ft: 14616 corp: 27/150b lim: 10 exec/s: 17 rss: 74Mb L: 4/10 MS: 1 CrossOver- 00:06:48.119 #34 DONE cov: 12174 ft: 14616 corp: 27/150b lim: 10 exec/s: 17 rss: 74Mb 00:06:48.119 ###### Recommended dictionary. ###### 00:06:48.119 "\377\377\377\017" # Uses: 1 00:06:48.119 ###### End of recommended dictionary. ###### 00:06:48.119 Done 34 runs in 2 second(s) 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:48.119 11:53:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:48.119 [2024-07-25 11:53:25.406395] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:48.119 [2024-07-25 11:53:25.406491] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904659 ] 00:06:48.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.637 [2024-07-25 11:53:25.709166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.637 [2024-07-25 11:53:25.803665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.637 [2024-07-25 11:53:25.863244] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.637 [2024-07-25 11:53:25.879546] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:48.637 INFO: Running with entropic power schedule (0xFF, 100). 00:06:48.637 INFO: Seed: 331163819 00:06:48.637 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:48.637 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:48.637 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:48.637 INFO: A corpus is not provided, starting from an empty corpus 00:06:48.637 #2 INITED exec/s: 0 rss: 64Mb 00:06:48.637 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:48.637 This may also happen if the target rejected all inputs we tried so far 00:06:48.637 [2024-07-25 11:53:25.934226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:48.637 [2024-07-25 11:53:25.934261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.157 NEW_FUNC[1/699]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:49.157 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:49.157 #3 NEW cov: 11947 ft: 11946 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:06:49.157 [2024-07-25 11:53:26.305163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a41 cdw11:00000000 00:06:49.157 [2024-07-25 11:53:26.305213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.157 #4 NEW cov: 12060 ft: 12592 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeByte- 00:06:49.157 [2024-07-25 11:53:26.385334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a6e cdw11:00000000 00:06:49.157 [2024-07-25 11:53:26.385371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.157 [2024-07-25 11:53:26.385404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006e6e cdw11:00000000 
00:06:49.157 [2024-07-25 11:53:26.385421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.157 [2024-07-25 11:53:26.385449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006e6e cdw11:00000000 00:06:49.157 [2024-07-25 11:53:26.385465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.157 #5 NEW cov: 12066 ft: 13030 corp: 4/12b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:49.157 [2024-07-25 11:53:26.445418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:49.157 [2024-07-25 11:53:26.445451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.157 [2024-07-25 11:53:26.445483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006e6e cdw11:00000000 00:06:49.157 [2024-07-25 11:53:26.445500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.416 #6 NEW cov: 12151 ft: 13434 corp: 5/17b lim: 10 exec/s: 0 rss: 72Mb L: 5/7 MS: 1 CrossOver- 00:06:49.416 [2024-07-25 11:53:26.535604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:49.416 [2024-07-25 11:53:26.535638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.416 #7 NEW cov: 12151 ft: 13619 corp: 6/19b lim: 10 exec/s: 0 rss: 72Mb L: 2/7 MS: 1 ChangeByte- 00:06:49.416 [2024-07-25 11:53:26.615845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2d cdw11:00000000 00:06:49.416 [2024-07-25 11:53:26.615896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.416 #8 NEW cov: 12151 ft: 13691 corp: 7/21b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 ChangeBit- 00:06:49.416 [2024-07-25 11:53:26.696027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:49.416 [2024-07-25 11:53:26.696062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.416 [2024-07-25 11:53:26.696093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfbf cdw11:00000000 00:06:49.416 [2024-07-25 11:53:26.696109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.675 #9 NEW cov: 12151 ft: 13738 corp: 8/26b lim: 10 exec/s: 0 rss: 73Mb L: 5/7 MS: 1 InsertRepeatedBytes- 00:06:49.675 [2024-07-25 11:53:26.756122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a0a cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.756154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.675 #10 NEW cov: 12151 ft: 13777 corp: 9/28b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 ChangeBit- 00:06:49.675 [2024-07-25 11:53:26.806312] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.806344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.675 [2024-07-25 11:53:26.806375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfbf cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.806392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.675 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:49.675 #11 NEW cov: 12174 ft: 13822 corp: 10/33b lim: 10 exec/s: 0 rss: 73Mb L: 5/7 MS: 1 CrossOver- 00:06:49.675 [2024-07-25 11:53:26.886599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.886632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.675 [2024-07-25 11:53:26.886662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfbf cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.886681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.675 [2024-07-25 11:53:26.886709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.886726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.675 [2024-07-25 11:53:26.886778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.886797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.675 [2024-07-25 11:53:26.886832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ffbf cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.886850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.675 #12 NEW cov: 12174 ft: 14096 corp: 11/43b lim: 10 exec/s: 12 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:49.675 [2024-07-25 11:53:26.946621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3c cdw11:00000000 00:06:49.675 [2024-07-25 11:53:26.946654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.934 #13 NEW cov: 12174 ft: 14143 corp: 12/45b lim: 10 exec/s: 13 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:06:49.934 [2024-07-25 11:53:26.996904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:49.934 [2024-07-25 11:53:26.996936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.934 [2024-07-25 11:53:26.996968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 
cdw10:0000bfbf cdw11:00000000 00:06:49.934 [2024-07-25 11:53:26.996984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.934 [2024-07-25 11:53:26.997011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.934 [2024-07-25 11:53:26.997027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:26.997055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:49.935 [2024-07-25 11:53:26.997070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:26.997098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ffbf cdw11:00000000 00:06:49.935 [2024-07-25 11:53:26.997113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.935 #14 NEW cov: 12174 ft: 14178 corp: 13/55b lim: 10 exec/s: 14 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:49.935 [2024-07-25 11:53:27.076996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.077028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.077060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005bbf cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.077076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.935 #15 NEW cov: 12174 ft: 14202 corp: 14/60b lim: 10 exec/s: 15 rss: 73Mb L: 5/10 MS: 1 ChangeByte- 00:06:49.935 [2024-07-25 11:53:27.127251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.127283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.127314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfbf cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.127330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.127357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.127373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.127404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.127419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.127446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 
cdw10:0000ffbb cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.127462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.935 #16 NEW cov: 12174 ft: 14256 corp: 15/70b lim: 10 exec/s: 16 rss: 73Mb L: 10/10 MS: 1 ChangeBit- 00:06:49.935 [2024-07-25 11:53:27.177383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.177415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.177446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.177462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.177490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.177505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.177532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.177548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.935 [2024-07-25 11:53:27.177576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000abf cdw11:00000000 00:06:49.935 [2024-07-25 11:53:27.177592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.194 #17 NEW cov: 12174 ft: 14279 corp: 16/80b lim: 10 exec/s: 17 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:50.194 [2024-07-25 11:53:27.257414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f6be cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.257446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.194 #18 NEW cov: 12174 ft: 14331 corp: 17/82b lim: 10 exec/s: 18 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:50.194 [2024-07-25 11:53:27.307498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3c cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.307528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.194 #19 NEW cov: 12174 ft: 14408 corp: 18/85b lim: 10 exec/s: 19 rss: 73Mb L: 3/10 MS: 1 CrossOver- 00:06:50.194 [2024-07-25 11:53:27.387746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.387778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.194 #20 NEW cov: 12174 ft: 14417 corp: 19/87b lim: 10 exec/s: 20 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:50.194 [2024-07-25 11:53:27.437971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000a6e cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.438002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.194 [2024-07-25 11:53:27.438034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006e6e cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.438050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.194 [2024-07-25 11:53:27.438081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006c6e cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.438098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.194 #21 NEW cov: 12174 ft: 14468 corp: 20/94b lim: 10 exec/s: 21 rss: 73Mb L: 7/10 MS: 1 ChangeBit- 00:06:50.194 [2024-07-25 11:53:27.488098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f591 cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.488130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.194 [2024-07-25 11:53:27.488161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006e6e cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.488176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.194 [2024-07-25 11:53:27.488204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006c6e cdw11:00000000 00:06:50.194 [2024-07-25 11:53:27.488219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.453 #22 NEW cov: 12174 ft: 14473 corp: 21/101b lim: 10 exec/s: 22 rss: 73Mb L: 7/10 MS: 1 ChangeBinInt- 00:06:50.453 [2024-07-25 11:53:27.568271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a6e cdw11:00000000 00:06:50.453 [2024-07-25 11:53:27.568304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.453 [2024-07-25 11:53:27.568333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006e6e cdw11:00000000 00:06:50.453 [2024-07-25 11:53:27.568349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.453 [2024-07-25 11:53:27.568374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a6e cdw11:00000000 00:06:50.453 [2024-07-25 11:53:27.568389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.453 #23 NEW cov: 12174 ft: 14515 corp: 22/108b lim: 10 exec/s: 23 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:06:50.453 [2024-07-25 11:53:27.628334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003d3d cdw11:00000000 00:06:50.453 [2024-07-25 11:53:27.628366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.453 #24 NEW cov: 12174 
ft: 14536 corp: 23/110b lim: 10 exec/s: 24 rss: 73Mb L: 2/10 MS: 1 CopyPart- 00:06:50.453 [2024-07-25 11:53:27.678463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:50.453 [2024-07-25 11:53:27.678495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.453 #25 NEW cov: 12174 ft: 14554 corp: 24/112b lim: 10 exec/s: 25 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:50.712 [2024-07-25 11:53:27.758954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.758992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.759026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.759042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.759071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.759091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.759120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.759135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.759164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.759180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.712 #27 NEW cov: 12174 ft: 14566 corp: 25/122b lim: 10 exec/s: 27 rss: 73Mb L: 10/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:50.712 [2024-07-25 11:53:27.819066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000b3d cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.819102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.819134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfbf cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.819151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.819179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.819195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.819223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.819238] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.712 [2024-07-25 11:53:27.819265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ffbf cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.819281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.712 #28 NEW cov: 12174 ft: 14573 corp: 26/132b lim: 10 exec/s: 28 rss: 73Mb L: 10/10 MS: 1 ChangeBit- 00:06:50.712 [2024-07-25 11:53:27.879036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000216 cdw11:00000000 00:06:50.712 [2024-07-25 11:53:27.879071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.712 #31 NEW cov: 12174 ft: 14593 corp: 27/134b lim: 10 exec/s: 15 rss: 73Mb L: 2/10 MS: 3 ChangeBit-CopyPart-InsertByte- 00:06:50.712 #31 DONE cov: 12174 ft: 14593 corp: 27/134b lim: 10 exec/s: 15 rss: 73Mb 00:06:50.712 Done 31 runs in 2 second(s) 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:50.972 11:53:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 
traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:50.972 [2024-07-25 11:53:28.098278] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:50.972 [2024-07-25 11:53:28.098368] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905034 ] 00:06:50.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.231 [2024-07-25 11:53:28.418958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.231 [2024-07-25 11:53:28.513781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.489 [2024-07-25 11:53:28.574030] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.489 [2024-07-25 11:53:28.590329] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:51.489 INFO: Running with entropic power schedule (0xFF, 100). 00:06:51.489 INFO: Seed: 3042177608 00:06:51.489 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:51.489 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:51.489 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:51.489 INFO: A corpus is not provided, starting from an empty corpus 00:06:51.489 [2024-07-25 11:53:28.655676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.489 [2024-07-25 11:53:28.655706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.489 #2 INITED cov: 11956 ft: 11957 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:51.490 [2024-07-25 11:53:28.695664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.490 [2024-07-25 11:53:28.695692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.490 #3 NEW cov: 12087 ft: 12590 corp: 2/2b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 ChangeBinInt- 00:06:51.490 [2024-07-25 11:53:28.745803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.490 [2024-07-25 11:53:28.745831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.490 #4 NEW cov: 12093 ft: 12897 corp: 3/3b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 CopyPart- 00:06:51.748 [2024-07-25 11:53:28.796534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.748 [2024-07-25 11:53:28.796563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.748 [2024-07-25 11:53:28.796619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:51.748 [2024-07-25 11:53:28.796634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.748 [2024-07-25 11:53:28.796689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.748 [2024-07-25 11:53:28.796702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.748 [2024-07-25 11:53:28.796762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.748 [2024-07-25 11:53:28.796776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.748 [2024-07-25 11:53:28.796834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.748 [2024-07-25 11:53:28.796848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.748 #5 NEW cov: 12178 ft: 14010 corp: 4/8b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:51.748 [2024-07-25 11:53:28.836646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.748 [2024-07-25 11:53:28.836672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.748 [2024-07-25 11:53:28.836728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.748 [2024-07-25 11:53:28.836746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.748 [2024-07-25 11:53:28.836801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.748 [2024-07-25 11:53:28.836815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.836869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.836883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.836937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.836951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.749 #6 NEW cov: 12178 ft: 14088 corp: 5/13b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:06:51.749 [2024-07-25 11:53:28.886780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.886807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.886862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.886879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.886930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.886944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.886995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.887008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.887060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.887074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.749 #7 NEW cov: 12178 ft: 14146 corp: 6/18b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:06:51.749 [2024-07-25 11:53:28.946534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.946562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.946619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.946633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.749 #8 NEW cov: 12178 ft: 14380 corp: 7/20b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:06:51.749 [2024-07-25 11:53:28.986628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.986655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.749 [2024-07-25 11:53:28.986709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:28.986723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.749 #9 NEW cov: 12178 ft: 14437 corp: 8/22b lim: 5 exec/s: 0 rss: 
72Mb L: 2/5 MS: 1 InsertByte- 00:06:51.749 [2024-07-25 11:53:29.026574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.749 [2024-07-25 11:53:29.026600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.749 #10 NEW cov: 12178 ft: 14482 corp: 9/23b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:52.008 [2024-07-25 11:53:29.066688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.066713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.008 #11 NEW cov: 12178 ft: 14517 corp: 10/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:52.008 [2024-07-25 11:53:29.117120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.117146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.008 [2024-07-25 11:53:29.117205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.117219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.008 [2024-07-25 11:53:29.117271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.117284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.008 #12 NEW cov: 12178 ft: 14681 corp: 11/27b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:52.008 [2024-07-25 11:53:29.167306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.167333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.008 [2024-07-25 11:53:29.167389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.167403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.008 [2024-07-25 11:53:29.167458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.167472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.008 #13 NEW cov: 12178 ft: 14715 corp: 12/30b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 ChangeByte- 00:06:52.008 [2024-07-25 11:53:29.217106] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.217132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.008 #14 NEW cov: 12178 ft: 14729 corp: 13/31b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:52.008 [2024-07-25 11:53:29.267729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.267762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.008 [2024-07-25 11:53:29.267822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.267836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.008 [2024-07-25 11:53:29.267891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.267904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.008 [2024-07-25 11:53:29.267958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.008 [2024-07-25 11:53:29.267971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.008 #15 NEW cov: 12178 ft: 14764 corp: 14/35b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:52.267 [2024-07-25 11:53:29.317858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.317888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.317945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.317960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.318014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.318028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.318083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.318097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:06:52.267 #16 NEW cov: 12178 ft: 14766 corp: 15/39b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:52.267 [2024-07-25 11:53:29.357985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.358011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.358064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.358078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.358131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.358145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.358201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.358215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.267 #17 NEW cov: 12178 ft: 14816 corp: 16/43b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 CrossOver- 00:06:52.267 [2024-07-25 11:53:29.398265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.398292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.398346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.398361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.398416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.398430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.398484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.398501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.267 [2024-07-25 11:53:29.398556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.398570] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.267 #18 NEW cov: 12178 ft: 14836 corp: 17/48b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:52.267 [2024-07-25 11:53:29.447747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.447772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.267 #19 NEW cov: 12178 ft: 14849 corp: 18/49b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:52.267 [2024-07-25 11:53:29.488321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.267 [2024-07-25 11:53:29.488347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.268 [2024-07-25 11:53:29.488405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.488419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.268 [2024-07-25 11:53:29.488473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.488487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.268 [2024-07-25 11:53:29.488542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.488556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.268 #20 NEW cov: 12178 ft: 14891 corp: 19/53b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertByte- 00:06:52.268 [2024-07-25 11:53:29.528586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.528612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.268 [2024-07-25 11:53:29.528668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.528682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.268 [2024-07-25 11:53:29.528740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.528753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.268 [2024-07-25 11:53:29.528835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.528849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.268 [2024-07-25 11:53:29.528901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.268 [2024-07-25 11:53:29.528918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.835 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:52.835 #21 NEW cov: 12201 ft: 14926 corp: 20/58b lim: 5 exec/s: 21 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:52.835 [2024-07-25 11:53:29.859457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.859522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.835 [2024-07-25 11:53:29.859605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.859632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.835 [2024-07-25 11:53:29.859710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.859744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.835 #22 NEW cov: 12201 ft: 14980 corp: 21/61b lim: 5 exec/s: 22 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:06:52.835 [2024-07-25 11:53:29.898876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.898905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.835 #23 NEW cov: 12201 ft: 15011 corp: 22/62b lim: 5 exec/s: 23 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:52.835 [2024-07-25 11:53:29.949520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.949549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.835 [2024-07-25 11:53:29.949605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.949620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.835 [2024-07-25 11:53:29.949675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.949689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.835 [2024-07-25 11:53:29.949747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:29.949761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.835 #24 NEW cov: 12201 ft: 15039 corp: 23/66b lim: 5 exec/s: 24 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:06:52.835 [2024-07-25 11:53:30.009533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.835 [2024-07-25 11:53:30.009561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.835 [2024-07-25 11:53:30.009618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.009636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.836 [2024-07-25 11:53:30.009692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.009707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.836 #25 NEW cov: 12201 ft: 15052 corp: 24/69b lim: 5 exec/s: 25 rss: 74Mb L: 3/5 MS: 1 CrossOver- 00:06:52.836 [2024-07-25 11:53:30.069826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.069857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.836 [2024-07-25 11:53:30.069912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.069926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.836 [2024-07-25 11:53:30.069982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.069996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.836 [2024-07-25 11:53:30.070051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.070064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.836 #26 NEW cov: 12201 ft: 15070 corp: 25/73b lim: 5 exec/s: 26 rss: 74Mb L: 4/5 MS: 1 
EraseBytes- 00:06:52.836 [2024-07-25 11:53:30.109718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.109751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.836 [2024-07-25 11:53:30.109812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.109826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.836 [2024-07-25 11:53:30.109881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.836 [2024-07-25 11:53:30.109894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.094 #27 NEW cov: 12201 ft: 15090 corp: 26/76b lim: 5 exec/s: 27 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:06:53.094 [2024-07-25 11:53:30.159917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.159945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.160002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.160016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.160071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.160087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.095 #28 NEW cov: 12201 ft: 15098 corp: 27/79b lim: 5 exec/s: 28 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:06:53.095 [2024-07-25 11:53:30.210028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.210055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.210111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.210125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.210178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.210192] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.095 #29 NEW cov: 12201 ft: 15107 corp: 28/82b lim: 5 exec/s: 29 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:06:53.095 [2024-07-25 11:53:30.250205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.250231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.250285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.250300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.250353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.250366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.095 #30 NEW cov: 12201 ft: 15115 corp: 29/85b lim: 5 exec/s: 30 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:06:53.095 [2024-07-25 11:53:30.300451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.300478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.300535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.300549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.300602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.300616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.300670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.300683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.095 #31 NEW cov: 12201 ft: 15119 corp: 30/89b lim: 5 exec/s: 31 rss: 74Mb L: 4/5 MS: 1 InsertByte- 00:06:53.095 [2024-07-25 11:53:30.340711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.340743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.340815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.340829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.340884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.340897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.340953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.340967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.341021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.341035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.095 #32 NEW cov: 12201 ft: 15144 corp: 31/94b lim: 5 exec/s: 32 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:53.095 [2024-07-25 11:53:30.380754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.380781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.380838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.380852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.380907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.380921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.095 [2024-07-25 11:53:30.380976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.095 [2024-07-25 11:53:30.380990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.354 #33 NEW cov: 12201 ft: 15147 corp: 32/98b lim: 5 exec/s: 33 rss: 74Mb L: 4/5 MS: 1 ChangeBit- 00:06:53.354 [2024-07-25 11:53:30.441026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.354 [2024-07-25 11:53:30.441053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.354 [2024-07-25 11:53:30.441125] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.354 [2024-07-25 11:53:30.441139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.354 [2024-07-25 11:53:30.441194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.354 [2024-07-25 11:53:30.441207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.354 [2024-07-25 11:53:30.441259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.354 [2024-07-25 11:53:30.441273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.354 [2024-07-25 11:53:30.441326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.354 [2024-07-25 11:53:30.441339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.354 #34 NEW cov: 12201 ft: 15151 corp: 33/103b lim: 5 exec/s: 34 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:06:53.354 [2024-07-25 11:53:30.500711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.354 [2024-07-25 11:53:30.500741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.354 [2024-07-25 11:53:30.500815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.355 [2024-07-25 11:53:30.500829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.355 #35 NEW cov: 12201 ft: 15170 corp: 34/105b lim: 5 exec/s: 35 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:06:53.355 [2024-07-25 11:53:30.551122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.355 [2024-07-25 11:53:30.551149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.355 [2024-07-25 11:53:30.551202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.355 [2024-07-25 11:53:30.551217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.355 [2024-07-25 11:53:30.551269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.355 [2024-07-25 11:53:30.551282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0
00:06:53.355 [2024-07-25 11:53:30.551333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:53.355 [2024-07-25 11:53:30.551346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:53.355 #36 NEW cov: 12201 ft: 15176 corp: 35/109b lim: 5 exec/s: 36 rss: 74Mb L: 4/5 MS: 1 ChangeBinInt-
00:06:53.355 [2024-07-25 11:53:30.600813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:53.355 [2024-07-25 11:53:30.600840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:53.355 #37 NEW cov: 12201 ft: 15190 corp: 36/110b lim: 5 exec/s: 37 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes-
00:06:53.355 [2024-07-25 11:53:30.641212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:53.355 [2024-07-25 11:53:30.641241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:53.355 [2024-07-25 11:53:30.641295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:53.355 [2024-07-25 11:53:30.641309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:53.355 [2024-07-25 11:53:30.641360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:53.355 [2024-07-25 11:53:30.641374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:53.614 #38 NEW cov: 12201 ft: 15249 corp: 37/113b lim: 5 exec/s: 19 rss: 74Mb L: 3/5 MS: 1 ShuffleBytes-
00:06:53.614 #38 DONE cov: 12201 ft: 15249 corp: 37/113b lim: 5 exec/s: 19 rss: 74Mb
00:06:53.614 Done 38 runs in 2 second(s)
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:53.614 11:53:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
00:06:53.614 [2024-07-25 11:53:30.847126] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:06:53.614 [2024-07-25 11:53:30.847204] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905400 ]
00:06:53.614 EAL: No free 2048 kB hugepages reported on node 1
00:06:53.872 [2024-07-25 11:53:31.168399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.130 [2024-07-25 11:53:31.263715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.130 [2024-07-25 11:53:31.323303] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:54.130 [2024-07-25 11:53:31.339589] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 ***
00:06:54.130 INFO: Running with entropic power schedule (0xFF, 100).
00:06:54.130 INFO: Seed: 1496199779 00:06:54.130 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:54.130 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:54.130 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:54.130 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.130 [2024-07-25 11:53:31.404981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.130 [2024-07-25 11:53:31.405011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.130 #2 INITED cov: 11950 ft: 11951 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:54.388 [2024-07-25 11:53:31.445014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.445041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.388 #3 NEW cov: 12087 ft: 12585 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 CrossOver- 00:06:54.388 [2024-07-25 11:53:31.495143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.495169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.388 #4 NEW cov: 12093 ft: 12917 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:54.388 [2024-07-25 11:53:31.545411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.545437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.388 [2024-07-25 11:53:31.545511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.545526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.388 #5 NEW cov: 12178 ft: 13860 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:54.388 [2024-07-25 11:53:31.585496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.585521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.388 [2024-07-25 11:53:31.585596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.585611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.388 #6 NEW cov: 12178 ft: 14029 corp: 5/7b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:54.388 
[2024-07-25 11:53:31.635685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.635711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.388 [2024-07-25 11:53:31.635794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.635815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.388 #7 NEW cov: 12178 ft: 14117 corp: 6/9b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:54.388 [2024-07-25 11:53:31.675809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.675835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.388 [2024-07-25 11:53:31.675891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.388 [2024-07-25 11:53:31.675906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.647 #8 NEW cov: 12178 ft: 14176 corp: 7/11b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:54.647 [2024-07-25 11:53:31.725974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.726001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.726076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.726091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.647 #9 NEW cov: 12178 ft: 14229 corp: 8/13b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:06:54.647 [2024-07-25 11:53:31.766517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.766544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.766602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.766617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.766675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 
[2024-07-25 11:53:31.766688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.766746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.766761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.766818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.766831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.647 #10 NEW cov: 12178 ft: 14620 corp: 9/18b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:54.647 [2024-07-25 11:53:31.806493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.806519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.806575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.806592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.806649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.806663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.806719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.806733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.647 #11 NEW cov: 12178 ft: 14658 corp: 10/22b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 CopyPart- 00:06:54.647 [2024-07-25 11:53:31.856754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.856781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.856856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.856871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.856929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.856943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.856997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.857010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.647 [2024-07-25 11:53:31.857069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.857082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.647 #12 NEW cov: 12178 ft: 14688 corp: 11/27b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:54.647 [2024-07-25 11:53:31.916245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.647 [2024-07-25 11:53:31.916273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.647 #13 NEW cov: 12178 ft: 14756 corp: 12/28b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ChangeByte- 00:06:54.906 [2024-07-25 11:53:31.956871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:31.956897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.906 [2024-07-25 11:53:31.956969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:31.956984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.906 [2024-07-25 11:53:31.957044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:31.957058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.906 [2024-07-25 11:53:31.957116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:31.957130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.906 #14 NEW cov: 12178 ft: 14876 corp: 13/32b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeBit- 00:06:54.906 [2024-07-25 11:53:32.016589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:32.016617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:54.906 #15 NEW cov: 12178 ft: 14943 corp: 14/33b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:54.906 [2024-07-25 11:53:32.066675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:32.066701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.906 #16 NEW cov: 12178 ft: 14957 corp: 15/34b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:54.906 [2024-07-25 11:53:32.106967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:32.106993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.906 [2024-07-25 11:53:32.107052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:32.107066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.906 #17 NEW cov: 12178 ft: 14984 corp: 16/36b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:06:54.906 [2024-07-25 11:53:32.146944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:32.146970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.906 #18 NEW cov: 12178 ft: 15047 corp: 17/37b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:06:54.906 [2024-07-25 11:53:32.197195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:32.197220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.906 [2024-07-25 11:53:32.197279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.906 [2024-07-25 11:53:32.197293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.164 #19 NEW cov: 12178 ft: 15091 corp: 18/39b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:55.164 [2024-07-25 11:53:32.247488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-07-25 11:53:32.247514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.164 [2024-07-25 11:53:32.247576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-07-25 11:53:32.247590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.164 [2024-07-25 11:53:32.247646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-07-25 11:53:32.247660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.423 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:55.423 #20 NEW cov: 12201 ft: 15280 corp: 19/42b lim: 5 exec/s: 20 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:55.423 [2024-07-25 11:53:32.588444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.423 [2024-07-25 11:53:32.588508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.423 [2024-07-25 11:53:32.588603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.423 [2024-07-25 11:53:32.588629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.423 #21 NEW cov: 12201 ft: 15400 corp: 20/44b lim: 5 exec/s: 21 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:06:55.423 [2024-07-25 11:53:32.648263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.423 [2024-07-25 11:53:32.648291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.423 [2024-07-25 11:53:32.648349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.423 [2024-07-25 11:53:32.648363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.423 #22 NEW cov: 12201 ft: 15428 corp: 21/46b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:06:55.423 [2024-07-25 11:53:32.698402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.423 [2024-07-25 11:53:32.698428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.423 [2024-07-25 11:53:32.698483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.423 [2024-07-25 11:53:32.698497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.423 #23 NEW cov: 12201 ft: 15443 corp: 22/48b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:55.681 [2024-07-25 11:53:32.738524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.681 [2024-07-25 11:53:32.738550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.738606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.738620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.682 #24 NEW cov: 12201 ft: 15450 corp: 23/50b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:55.682 [2024-07-25 11:53:32.789119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.789145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.789218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.789232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.789288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.789302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.789358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.789372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.789429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.789443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.682 #25 NEW cov: 12201 ft: 15457 corp: 24/55b lim: 5 exec/s: 25 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:55.682 [2024-07-25 11:53:32.838960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.838985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.839058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.839072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.839126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.839140] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.682 #26 NEW cov: 12201 ft: 15462 corp: 25/58b lim: 5 exec/s: 26 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:55.682 [2024-07-25 11:53:32.878928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.878953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.879024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.879039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.682 #27 NEW cov: 12201 ft: 15476 corp: 26/60b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:55.682 [2024-07-25 11:53:32.919173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.919198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.919273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.919287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.919344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.919357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.682 #28 NEW cov: 12201 ft: 15515 corp: 27/63b lim: 5 exec/s: 28 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:55.682 [2024-07-25 11:53:32.969171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.969196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.682 [2024-07-25 11:53:32.969252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.682 [2024-07-25 11:53:32.969266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.941 #29 NEW cov: 12201 ft: 15526 corp: 28/65b lim: 5 exec/s: 29 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:55.941 [2024-07-25 11:53:33.019121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.941 [2024-07-25 11:53:33.019148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:55.941 #30 NEW cov: 12201 ft: 15545 corp: 29/66b lim: 5 exec/s: 30 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:55.941 [2024-07-25 11:53:33.069450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.941 [2024-07-25 11:53:33.069476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.941 [2024-07-25 11:53:33.069532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.941 [2024-07-25 11:53:33.069545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.941 #31 NEW cov: 12201 ft: 15552 corp: 30/68b lim: 5 exec/s: 31 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:06:55.941 [2024-07-25 11:53:33.119581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.941 [2024-07-25 11:53:33.119608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.941 [2024-07-25 11:53:33.119665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.941 [2024-07-25 11:53:33.119680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.941 #32 NEW cov: 12201 ft: 15567 corp: 31/70b lim: 5 exec/s: 32 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:55.941 [2024-07-25 11:53:33.159566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.941 [2024-07-25 11:53:33.159593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.941 #33 NEW cov: 12201 ft: 15608 corp: 32/71b lim: 5 exec/s: 33 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:06:55.941 [2024-07-25 11:53:33.209691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.941 [2024-07-25 11:53:33.209718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.199 #34 NEW cov: 12201 ft: 15657 corp: 33/72b lim: 5 exec/s: 34 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:06:56.199 [2024-07-25 11:53:33.259974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.200 [2024-07-25 11:53:33.260000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.200 [2024-07-25 11:53:33.260073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.200 [2024-07-25 11:53:33.260087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.200 #35 NEW cov: 12201 ft: 15664 corp: 34/74b lim: 5 exec/s: 35 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:06:56.200 [2024-07-25 11:53:33.310091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.200 [2024-07-25 11:53:33.310117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.200 [2024-07-25 11:53:33.310173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.200 [2024-07-25 11:53:33.310187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.200 #36 NEW cov: 12201 ft: 15671 corp: 35/76b lim: 5 exec/s: 36 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:56.200 [2024-07-25 11:53:33.350224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.200 [2024-07-25 11:53:33.350249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.200 [2024-07-25 11:53:33.350323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.200 [2024-07-25 11:53:33.350338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.200 #37 NEW cov: 12201 ft: 15677 corp: 36/78b lim: 5 exec/s: 37 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:56.200 [2024-07-25 11:53:33.390178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.200 [2024-07-25 11:53:33.390204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.200 #38 NEW cov: 12201 ft: 15688 corp: 37/79b lim: 5 exec/s: 19 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:06:56.200 #38 DONE cov: 12201 ft: 15688 corp: 37/79b lim: 5 exec/s: 19 rss: 74Mb 00:06:56.200 Done 38 runs in 2 second(s) 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.459 11:53:33 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.459 11:53:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:56.459 [2024-07-25 11:53:33.595193] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:56.459 [2024-07-25 11:53:33.595280] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905771 ] 00:06:56.459 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.718 [2024-07-25 11:53:33.913694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.718 [2024-07-25 11:53:34.000308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.977 [2024-07-25 11:53:34.060317] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.977 [2024-07-25 11:53:34.076631] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:56.977 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.977 INFO: Seed: 4233196114 00:06:56.977 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:56.977 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:56.977 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:56.977 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.977 #2 INITED exec/s: 0 rss: 64Mb 00:06:56.977 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:56.977 This may also happen if the target rejected all inputs we tried so far 00:06:56.977 [2024-07-25 11:53:34.131432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.977 [2024-07-25 11:53:34.131468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.977 [2024-07-25 11:53:34.131520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.977 [2024-07-25 11:53:34.131537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.977 [2024-07-25 11:53:34.131568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.977 [2024-07-25 11:53:34.131589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.236 NEW_FUNC[1/700]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:57.236 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.236 #12 NEW cov: 11998 ft: 11997 corp: 2/26b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 5 ShuffleBytes-CrossOver-CMP-ChangeBinInt-InsertRepeatedBytes- DE: "\001\\"- 00:06:57.236 [2024-07-25 11:53:34.504771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.237 [2024-07-25 11:53:34.504812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.237 [2024-07-25 11:53:34.504907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.237 [2024-07-25 11:53:34.504923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.237 [2024-07-25 11:53:34.505011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.237 [2024-07-25 11:53:34.505026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.496 #13 NEW cov: 12111 ft: 12712 corp: 3/51b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 ChangeBinInt- 00:06:57.496 [2024-07-25 11:53:34.575250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.575280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.496 [2024-07-25 11:53:34.575383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:57.496 [2024-07-25 11:53:34.575400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.496 [2024-07-25 11:53:34.575491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.575507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.496 [2024-07-25 11:53:34.575599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.575614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.496 #14 NEW cov: 12117 ft: 13370 corp: 4/89b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CrossOver- 00:06:57.496 [2024-07-25 11:53:34.645152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.645180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.496 [2024-07-25 11:53:34.645284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.645300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.496 [2024-07-25 11:53:34.645416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.645436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.496 #15 NEW cov: 12202 ft: 13654 corp: 5/114b lim: 40 exec/s: 0 rss: 72Mb L: 25/38 MS: 1 ShuffleBytes- 00:06:57.496 [2024-07-25 11:53:34.694756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.694783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.496 #18 NEW cov: 12202 ft: 14058 corp: 6/123b lim: 40 exec/s: 0 rss: 72Mb L: 9/38 MS: 3 ChangeByte-ChangeBinInt-CrossOver- 00:06:57.496 [2024-07-25 11:53:34.744900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.496 [2024-07-25 11:53:34.744936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.496 #19 NEW cov: 12202 ft: 14150 corp: 7/133b lim: 40 exec/s: 0 rss: 72Mb L: 10/38 MS: 1 InsertByte- 00:06:57.756 [2024-07-25 11:53:34.815738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.815772] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.815862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.815878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.815969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.815984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.756 #20 NEW cov: 12202 ft: 14249 corp: 8/161b lim: 40 exec/s: 0 rss: 72Mb L: 28/38 MS: 1 CopyPart- 00:06:57.756 [2024-07-25 11:53:34.866215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.866243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.866358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.866375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.866475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.866491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.866583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.866598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.756 #21 NEW cov: 12202 ft: 14260 corp: 9/194b lim: 40 exec/s: 0 rss: 72Mb L: 33/38 MS: 1 CrossOver- 00:06:57.756 [2024-07-25 11:53:34.916384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.916414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.916524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00050000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.916539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.916632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:57.756 [2024-07-25 11:53:34.916647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.916740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.916755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.756 #22 NEW cov: 12202 ft: 14367 corp: 10/227b lim: 40 exec/s: 0 rss: 72Mb L: 33/38 MS: 1 ChangeBinInt- 00:06:57.756 [2024-07-25 11:53:34.985776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.985809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:34.985939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:34.985962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.756 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:57.756 #23 NEW cov: 12225 ft: 14806 corp: 11/249b lim: 40 exec/s: 0 rss: 72Mb L: 22/38 MS: 1 CrossOver- 00:06:57.756 [2024-07-25 11:53:35.056357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:35.056432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.756 [2024-07-25 11:53:35.056579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.756 [2024-07-25 11:53:35.056621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.015 #24 NEW cov: 12225 ft: 14836 corp: 12/265b lim: 40 exec/s: 0 rss: 72Mb L: 16/38 MS: 1 EraseBytes- 00:06:58.015 [2024-07-25 11:53:35.137003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.015 [2024-07-25 11:53:35.137032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.015 [2024-07-25 11:53:35.137132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0dcfd9a2 cdw11:119cffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.015 [2024-07-25 11:53:35.137154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.015 [2024-07-25 11:53:35.137263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.015 [2024-07-25 11:53:35.137282] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.015 [2024-07-25 11:53:35.137381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.015 [2024-07-25 11:53:35.137399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.015 #25 NEW cov: 12225 ft: 14891 corp: 13/303b lim: 40 exec/s: 25 rss: 72Mb L: 38/38 MS: 1 CMP- DE: "\000\032\015\317\331\242\021\234"- 00:06:58.015 [2024-07-25 11:53:35.206339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00002a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.015 [2024-07-25 11:53:35.206368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.015 #27 NEW cov: 12225 ft: 14933 corp: 14/311b lim: 40 exec/s: 27 rss: 72Mb L: 8/38 MS: 2 CrossOver-InsertByte- 00:06:58.015 [2024-07-25 11:53:35.257077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00200000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.015 [2024-07-25 11:53:35.257104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.016 [2024-07-25 11:53:35.257188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.016 [2024-07-25 11:53:35.257204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.016 [2024-07-25 11:53:35.257300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.016 [2024-07-25 11:53:35.257314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.016 #28 NEW cov: 12225 ft: 14942 corp: 15/336b lim: 40 exec/s: 28 rss: 72Mb L: 25/38 MS: 1 ChangeBit- 00:06:58.016 [2024-07-25 11:53:35.307510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.016 [2024-07-25 11:53:35.307536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.016 [2024-07-25 11:53:35.307620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffe3ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.016 [2024-07-25 11:53:35.307634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.016 [2024-07-25 11:53:35.307728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.016 [2024-07-25 11:53:35.307745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.016 [2024-07-25 11:53:35.307842] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.016 [2024-07-25 11:53:35.307856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.337 #29 NEW cov: 12225 ft: 14996 corp: 16/374b lim: 40 exec/s: 29 rss: 72Mb L: 38/38 MS: 1 ChangeByte- 00:06:58.337 [2024-07-25 11:53:35.357837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.357863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.337 [2024-07-25 11:53:35.357955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.357974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.337 [2024-07-25 11:53:35.358065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000011a cdw11:0dd0548a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.358079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.337 [2024-07-25 11:53:35.358170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:71f40000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.358185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.337 #30 NEW cov: 12225 ft: 15012 corp: 17/407b lim: 40 exec/s: 30 rss: 72Mb L: 33/38 MS: 1 CMP- DE: "\001\032\015\320T\212q\364"- 00:06:58.337 [2024-07-25 11:53:35.407902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.407927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.337 [2024-07-25 11:53:35.408023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.408045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.337 [2024-07-25 11:53:35.408144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.408163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.337 [2024-07-25 11:53:35.408261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.408278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:58.337 #31 NEW cov: 12225 ft: 15022 corp: 18/442b lim: 40 exec/s: 31 rss: 72Mb L: 35/38 MS: 1 CopyPart- 00:06:58.337 [2024-07-25 11:53:35.457265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.457290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.337 #32 NEW cov: 12225 ft: 15032 corp: 19/451b lim: 40 exec/s: 32 rss: 72Mb L: 9/38 MS: 1 CrossOver- 00:06:58.337 [2024-07-25 11:53:35.507393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.507418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.337 #35 NEW cov: 12225 ft: 15064 corp: 20/465b lim: 40 exec/s: 35 rss: 72Mb L: 14/38 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:06:58.337 [2024-07-25 11:53:35.557766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.557791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.337 [2024-07-25 11:53:35.557883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffe3ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.557898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.337 #36 NEW cov: 12225 ft: 15082 corp: 21/487b lim: 40 exec/s: 36 rss: 72Mb L: 22/38 MS: 1 EraseBytes- 00:06:58.337 [2024-07-25 11:53:35.618742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0400f800 cdw11:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.337 [2024-07-25 11:53:35.618766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.338 [2024-07-25 11:53:35.618854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0dcfd9a2 cdw11:119cffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.338 [2024-07-25 11:53:35.618870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.338 [2024-07-25 11:53:35.618956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.338 [2024-07-25 11:53:35.618969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.338 [2024-07-25 11:53:35.619054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.338 [2024-07-25 11:53:35.619069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.597 #37 NEW cov: 12225 ft: 15088 corp: 
22/525b lim: 40 exec/s: 37 rss: 72Mb L: 38/38 MS: 1 ChangeBinInt- 00:06:58.597 [2024-07-25 11:53:35.678249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.678275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.678369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffff5f5 cdw11:f5f5f5f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.678390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.597 #38 NEW cov: 12225 ft: 15093 corp: 23/543b lim: 40 exec/s: 38 rss: 73Mb L: 18/38 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:58.597 [2024-07-25 11:53:35.738728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.738757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.738849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.738864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.738959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:bfbfbfbf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.738974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.597 #39 NEW cov: 12225 ft: 15110 corp: 24/573b lim: 40 exec/s: 39 rss: 73Mb L: 30/38 MS: 1 InsertRepeatedBytes- 00:06:58.597 [2024-07-25 11:53:35.789239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.789265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.789350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00050000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.789368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.789458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.789472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.789562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 
[2024-07-25 11:53:35.789577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.597 #40 NEW cov: 12225 ft: 15120 corp: 25/611b lim: 40 exec/s: 40 rss: 73Mb L: 38/38 MS: 1 CrossOver- 00:06:58.597 [2024-07-25 11:53:35.849042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.849068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.849168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.849183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.597 [2024-07-25 11:53:35.849277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.849293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.597 #41 NEW cov: 12225 ft: 15142 corp: 26/635b lim: 40 exec/s: 41 rss: 73Mb L: 24/38 MS: 1 EraseBytes- 00:06:58.597 [2024-07-25 11:53:35.898975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.597 [2024-07-25 11:53:35.899002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.598 [2024-07-25 11:53:35.899086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00002800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.598 [2024-07-25 11:53:35.899102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.857 #42 NEW cov: 12225 ft: 15185 corp: 27/651b lim: 40 exec/s: 42 rss: 73Mb L: 16/38 MS: 1 CrossOver- 00:06:58.857 [2024-07-25 11:53:35.968942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.857 [2024-07-25 11:53:35.968971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.857 #44 NEW cov: 12225 ft: 15245 corp: 28/659b lim: 40 exec/s: 44 rss: 73Mb L: 8/38 MS: 2 InsertRepeatedBytes-InsertByte- 00:06:58.857 [2024-07-25 11:53:36.019655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00200000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.857 [2024-07-25 11:53:36.019682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.857 [2024-07-25 11:53:36.019783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.857 [2024-07-25 11:53:36.019799] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.857 [2024-07-25 11:53:36.019889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.857 [2024-07-25 11:53:36.019905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.857 #45 NEW cov: 12225 ft: 15299 corp: 29/684b lim: 40 exec/s: 45 rss: 73Mb L: 25/38 MS: 1 ChangeBinInt- 00:06:58.857 [2024-07-25 11:53:36.089596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.857 [2024-07-25 11:53:36.089625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.857 [2024-07-25 11:53:36.089723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.857 [2024-07-25 11:53:36.089744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.857 #46 NEW cov: 12225 ft: 15309 corp: 30/706b lim: 40 exec/s: 23 rss: 73Mb L: 22/38 MS: 1 ShuffleBytes- 00:06:58.857 #46 DONE cov: 12225 ft: 15309 corp: 30/706b lim: 40 exec/s: 23 rss: 73Mb 00:06:58.857 ###### Recommended dictionary. ###### 00:06:58.857 "\001\\" # Uses: 0 00:06:58.857 "\000\032\015\317\331\242\021\234" # Uses: 0 00:06:58.857 "\001\032\015\320T\212q\364" # Uses: 0 00:06:58.857 "\377\377\377\377" # Uses: 0 00:06:58.857 ###### End of recommended dictionary. 
###### 00:06:58.857 Done 46 runs in 2 second(s) 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.117 11:53:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:59.117 [2024-07-25 11:53:36.299633] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:59.117 [2024-07-25 11:53:36.299716] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906151 ] 00:06:59.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.376 [2024-07-25 11:53:36.494128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.376 [2024-07-25 11:53:36.563348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.376 [2024-07-25 11:53:36.622730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.376 [2024-07-25 11:53:36.639048] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:59.376 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.376 INFO: Seed: 2501250241 00:06:59.376 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:06:59.376 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:06:59.376 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:59.376 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.376 #2 INITED exec/s: 0 rss: 65Mb 00:06:59.376 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.376 This may also happen if the target rejected all inputs we tried so far 00:06:59.635 [2024-07-25 11:53:36.693849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.635 [2024-07-25 11:53:36.693887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.635 [2024-07-25 11:53:36.693923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.635 [2024-07-25 11:53:36.693938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.895 NEW_FUNC[1/701]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:59.895 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.895 #19 NEW cov: 12005 ft: 12006 corp: 2/18b lim: 40 exec/s: 0 rss: 72Mb L: 17/17 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:59.895 [2024-07-25 11:53:37.064750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff0500ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.895 [2024-07-25 11:53:37.064796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.895 [2024-07-25 11:53:37.064849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.895 [2024-07-25 11:53:37.064866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.895 #20 NEW cov: 12123 ft: 12597 corp: 3/35b lim: 40 exec/s: 0 rss: 72Mb 
L: 17/17 MS: 1 ChangeBinInt- 00:06:59.895 [2024-07-25 11:53:37.154901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff05ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.895 [2024-07-25 11:53:37.154935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.895 [2024-07-25 11:53:37.154985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.895 [2024-07-25 11:53:37.155001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.895 [2024-07-25 11:53:37.155032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.895 [2024-07-25 11:53:37.155052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.154 #21 NEW cov: 12129 ft: 13074 corp: 4/60b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 CopyPart- 00:07:00.154 [2024-07-25 11:53:37.234985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a182744 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.154 [2024-07-25 11:53:37.235020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.154 #35 NEW cov: 12214 ft: 14098 corp: 5/71b lim: 40 exec/s: 0 rss: 72Mb L: 11/25 MS: 4 InsertByte-CrossOver-InsertByte-InsertRepeatedBytes- 00:07:00.154 [2024-07-25 11:53:37.305212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff0500ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.154 [2024-07-25 11:53:37.305246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.154 [2024-07-25 11:53:37.305281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.154 [2024-07-25 11:53:37.305298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.155 #36 NEW cov: 12214 ft: 14207 corp: 6/88b lim: 40 exec/s: 0 rss: 72Mb L: 17/25 MS: 1 ShuffleBytes- 00:07:00.155 [2024-07-25 11:53:37.365323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a182744 cdw11:ffff0a44 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.155 [2024-07-25 11:53:37.365358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.155 #37 NEW cov: 12214 ft: 14356 corp: 7/99b lim: 40 exec/s: 0 rss: 72Mb L: 11/25 MS: 1 CrossOver- 00:07:00.155 [2024-07-25 11:53:37.455662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff0500ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.155 [2024-07-25 11:53:37.455697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.155 [2024-07-25 11:53:37.455734] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.155 [2024-07-25 11:53:37.455757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.414 #38 NEW cov: 12214 ft: 14417 corp: 8/116b lim: 40 exec/s: 0 rss: 72Mb L: 17/25 MS: 1 ShuffleBytes- 00:07:00.414 [2024-07-25 11:53:37.515875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a182744 cdw11:4444d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.515909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.515945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.515961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.515992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.516008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.516038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.516053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.414 #39 NEW cov: 12214 ft: 14746 corp: 9/150b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:00.414 [2024-07-25 11:53:37.575894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2cff0500 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.575929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.575964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.575980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.414 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:00.414 #40 NEW cov: 12231 ft: 14782 corp: 10/168b lim: 40 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 InsertByte- 00:07:00.414 [2024-07-25 11:53:37.625987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff0500ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.626018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.626068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.626085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.414 #46 NEW cov: 12231 ft: 14853 corp: 11/184b lim: 40 exec/s: 46 rss: 73Mb L: 16/34 MS: 1 EraseBytes- 00:07:00.414 [2024-07-25 11:53:37.706298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.706329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.706378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.706394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.706425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.706440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.414 [2024-07-25 11:53:37.706470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.414 [2024-07-25 11:53:37.706485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.673 #50 NEW cov: 12231 ft: 14877 corp: 12/222b lim: 40 exec/s: 50 rss: 73Mb L: 38/38 MS: 4 InsertByte-CrossOver-EraseBytes-InsertRepeatedBytes- 00:07:00.673 [2024-07-25 11:53:37.766344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a182744 cdw11:ffff0a44 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.673 [2024-07-25 11:53:37.766376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.673 #51 NEW cov: 12231 ft: 14904 corp: 13/231b lim: 40 exec/s: 51 rss: 73Mb L: 9/38 MS: 1 EraseBytes- 00:07:00.673 [2024-07-25 11:53:37.846561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff0500ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.673 [2024-07-25 11:53:37.846591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.673 [2024-07-25 11:53:37.846644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0500ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.673 [2024-07-25 11:53:37.846660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.673 #52 NEW cov: 12231 ft: 14961 corp: 14/251b lim: 40 exec/s: 52 rss: 73Mb L: 20/38 MS: 1 CrossOver- 00:07:00.673 [2024-07-25 11:53:37.896813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a182744 cdw11:4444d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.673 [2024-07-25 11:53:37.896843] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.673 [2024-07-25 11:53:37.896892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.674 [2024-07-25 11:53:37.896908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.674 [2024-07-25 11:53:37.896939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d67a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.674 [2024-07-25 11:53:37.896954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.674 [2024-07-25 11:53:37.896984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6d64444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.674 [2024-07-25 11:53:37.897000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.674 #53 NEW cov: 12231 ft: 14979 corp: 15/286b lim: 40 exec/s: 53 rss: 73Mb L: 35/38 MS: 1 InsertByte- 00:07:00.932 [2024-07-25 11:53:37.977034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff0500ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:37.977066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.932 [2024-07-25 11:53:37.977102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:37.977118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.932 [2024-07-25 11:53:37.977149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:37.977166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.932 #54 NEW cov: 12231 ft: 15016 corp: 16/311b lim: 40 exec/s: 54 rss: 73Mb L: 25/38 MS: 1 CrossOver- 00:07:00.932 [2024-07-25 11:53:38.026980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1a182744 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.027010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.932 #60 NEW cov: 12231 ft: 15052 corp: 17/322b lim: 40 exec/s: 60 rss: 73Mb L: 11/38 MS: 1 ChangeBit- 00:07:00.932 [2024-07-25 11:53:38.077295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a182744 cdw11:4444d6d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.077325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.932 [2024-07-25 11:53:38.077374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 
cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.077397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.932 [2024-07-25 11:53:38.077427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.077442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.932 [2024-07-25 11:53:38.077472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.077487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.932 #61 NEW cov: 12231 ft: 15094 corp: 18/356b lim: 40 exec/s: 61 rss: 73Mb L: 34/38 MS: 1 ChangeBit- 00:07:00.932 [2024-07-25 11:53:38.127226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.127258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.932 #62 NEW cov: 12231 ft: 15132 corp: 19/366b lim: 40 exec/s: 62 rss: 73Mb L: 10/38 MS: 1 EraseBytes- 00:07:00.932 [2024-07-25 11:53:38.207496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff050005 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.207527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.932 [2024-07-25 11:53:38.207576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.932 [2024-07-25 11:53:38.207592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.191 #63 NEW cov: 12231 ft: 15138 corp: 20/382b lim: 40 exec/s: 63 rss: 73Mb L: 16/38 MS: 1 ChangeBinInt- 00:07:01.191 [2024-07-25 11:53:38.287765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7fffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.191 [2024-07-25 11:53:38.287795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.191 #69 NEW cov: 12231 ft: 15183 corp: 21/392b lim: 40 exec/s: 69 rss: 73Mb L: 10/38 MS: 1 ChangeBit- 00:07:01.191 [2024-07-25 11:53:38.367967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1aebd8bb cdw11:bbbbbbbb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.191 [2024-07-25 11:53:38.367999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.191 #70 NEW cov: 12231 ft: 15208 corp: 22/403b lim: 40 exec/s: 70 rss: 73Mb L: 11/38 MS: 1 ChangeBinInt- 00:07:01.191 [2024-07-25 11:53:38.448323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff050000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.191 [2024-07-25 11:53:38.448355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.191 [2024-07-25 11:53:38.448403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.191 [2024-07-25 11:53:38.448419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.191 [2024-07-25 11:53:38.448449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff00 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.191 [2024-07-25 11:53:38.448465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.191 [2024-07-25 11:53:38.448499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.191 [2024-07-25 11:53:38.448515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.450 #71 NEW cov: 12231 ft: 15219 corp: 23/437b lim: 40 exec/s: 71 rss: 74Mb L: 34/38 MS: 1 InsertRepeatedBytes- 00:07:01.451 [2024-07-25 11:53:38.528404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2cff0500 cdw11:ffff0fff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.451 [2024-07-25 11:53:38.528435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.451 [2024-07-25 11:53:38.528483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.451 [2024-07-25 11:53:38.528499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.451 #72 NEW cov: 12238 ft: 15240 corp: 24/457b lim: 40 exec/s: 72 rss: 74Mb L: 20/38 MS: 1 CMP- DE: "\377\017"- 00:07:01.451 [2024-07-25 11:53:38.608726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff05ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.451 [2024-07-25 11:53:38.608764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.451 [2024-07-25 11:53:38.608814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.451 [2024-07-25 11:53:38.608830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.451 [2024-07-25 11:53:38.608860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.451 [2024-07-25 11:53:38.608876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.451 [2024-07-25 11:53:38.608906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.451 [2024-07-25 11:53:38.608921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.451 #73 NEW cov: 12238 ft: 15258 corp: 25/495b lim: 40 exec/s: 36 rss: 74Mb L: 38/38 MS: 1 CopyPart- 00:07:01.451 #73 DONE cov: 12238 ft: 15258 corp: 25/495b lim: 40 exec/s: 36 rss: 74Mb 00:07:01.451 ###### Recommended dictionary. ###### 00:07:01.451 "\377\017" # Uses: 0 00:07:01.451 ###### End of recommended dictionary. ###### 00:07:01.451 Done 73 runs in 2 second(s) 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.710 11:53:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:01.710 [2024-07-25 11:53:38.842406] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:07:01.710 [2024-07-25 11:53:38.842478] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906434 ] 00:07:01.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.969 [2024-07-25 11:53:39.048746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.969 [2024-07-25 11:53:39.118507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.969 [2024-07-25 11:53:39.177896] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.969 [2024-07-25 11:53:39.194199] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:01.969 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.969 INFO: Seed: 760283177 00:07:01.969 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:01.969 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:01.969 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:01.969 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.969 #2 INITED exec/s: 0 rss: 65Mb 00:07:01.969 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:01.969 This may also happen if the target rejected all inputs we tried so far 00:07:01.969 [2024-07-25 11:53:39.249042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.969 [2024-07-25 11:53:39.249078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.969 [2024-07-25 11:53:39.249128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.969 [2024-07-25 11:53:39.249145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.969 [2024-07-25 11:53:39.249175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.969 [2024-07-25 11:53:39.249191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.488 NEW_FUNC[1/701]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:02.488 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.488 #10 NEW cov: 12008 ft: 12005 corp: 2/25b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:02.488 [2024-07-25 11:53:39.620001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.488 [2024-07-25 11:53:39.620049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.488 [2024-07-25 11:53:39.620087] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.488 [2024-07-25 11:53:39.620104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.488 [2024-07-25 11:53:39.620135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.488 [2024-07-25 11:53:39.620151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.488 #16 NEW cov: 12121 ft: 12623 corp: 3/53b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:02.488 [2024-07-25 11:53:39.709898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.488 [2024-07-25 11:53:39.709933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.488 #17 NEW cov: 12127 ft: 13576 corp: 4/61b lim: 40 exec/s: 0 rss: 72Mb L: 8/28 MS: 1 InsertRepeatedBytes- 00:07:02.488 [2024-07-25 11:53:39.770177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.488 [2024-07-25 11:53:39.770208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.488 [2024-07-25 11:53:39.770257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.488 [2024-07-25 11:53:39.770273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.488 [2024-07-25 11:53:39.770303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.488 [2024-07-25 11:53:39.770319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.747 #18 NEW cov: 12212 ft: 13838 corp: 5/89b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeBit- 00:07:02.747 [2024-07-25 11:53:39.850399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.850431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.850465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.850481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.850511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.850527] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.747 #19 NEW cov: 12212 ft: 13979 corp: 6/114b lim: 40 exec/s: 0 rss: 72Mb L: 25/28 MS: 1 CrossOver- 00:07:02.747 [2024-07-25 11:53:39.900483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:0000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.900516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.900565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.900581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.900611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.900626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.747 #20 NEW cov: 12212 ft: 14040 corp: 7/142b lim: 40 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeByte- 00:07:02.747 [2024-07-25 11:53:39.950747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.950776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.950825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.950842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.950871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000005c cdw11:5c5c5c5c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.950887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.950916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:5c5c5c5c cdw11:5c5c5c5c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.950931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:39.950960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:5c5c5c00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:39.950975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.747 #21 NEW cov: 12212 ft: 14540 corp: 8/182b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:02.747 [2024-07-25 11:53:40.010912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:40.010950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:40.010990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:40.011009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.747 [2024-07-25 11:53:40.011044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.747 [2024-07-25 11:53:40.011062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.747 #22 NEW cov: 12212 ft: 14555 corp: 9/210b lim: 40 exec/s: 0 rss: 72Mb L: 28/40 MS: 1 ShuffleBytes- 00:07:03.006 [2024-07-25 11:53:40.060999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.006 [2024-07-25 11:53:40.061042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.006 [2024-07-25 11:53:40.061078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.006 [2024-07-25 11:53:40.061095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.006 [2024-07-25 11:53:40.061125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.006 [2024-07-25 11:53:40.061141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.006 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:03.006 #23 NEW cov: 12235 ft: 14619 corp: 10/238b lim: 40 exec/s: 0 rss: 72Mb L: 28/40 MS: 1 ChangeBinInt- 00:07:03.006 [2024-07-25 11:53:40.151178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02021c00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.006 [2024-07-25 11:53:40.151214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.006 [2024-07-25 11:53:40.151249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.007 [2024-07-25 11:53:40.151266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.007 [2024-07-25 11:53:40.151295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.007 [2024-07-25 11:53:40.151311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:03.007 #24 NEW cov: 12235 ft: 14667 corp: 11/266b lim: 40 exec/s: 0 rss: 72Mb L: 28/40 MS: 1 ChangeBinInt- 00:07:03.007 [2024-07-25 11:53:40.231350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.007 [2024-07-25 11:53:40.231386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.007 [2024-07-25 11:53:40.231422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.007 [2024-07-25 11:53:40.231439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.007 [2024-07-25 11:53:40.231469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.007 [2024-07-25 11:53:40.231484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.007 #25 NEW cov: 12235 ft: 14726 corp: 12/291b lim: 40 exec/s: 25 rss: 72Mb L: 25/40 MS: 1 ChangeBinInt- 00:07:03.266 [2024-07-25 11:53:40.311570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.311603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.311639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.311655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.311691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.311708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.266 #26 NEW cov: 12235 ft: 14776 corp: 13/319b lim: 40 exec/s: 26 rss: 72Mb L: 28/40 MS: 1 CopyPart- 00:07:03.266 [2024-07-25 11:53:40.371702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.371734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.371776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.371793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.371839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:03.266 [2024-07-25 11:53:40.371855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.266 #27 NEW cov: 12235 ft: 14791 corp: 14/344b lim: 40 exec/s: 27 rss: 72Mb L: 25/40 MS: 1 ChangeBinInt- 00:07:03.266 [2024-07-25 11:53:40.431890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.431920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.431968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.431985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.432014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.432029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.266 #28 NEW cov: 12235 ft: 14806 corp: 15/372b lim: 40 exec/s: 28 rss: 72Mb L: 28/40 MS: 1 ChangeByte- 00:07:03.266 [2024-07-25 11:53:40.481983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.482014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.482048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.482065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.482094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.482110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.266 #29 NEW cov: 12235 ft: 14846 corp: 16/403b lim: 40 exec/s: 29 rss: 72Mb L: 31/40 MS: 1 InsertRepeatedBytes- 00:07:03.266 [2024-07-25 11:53:40.562231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.562268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.562303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.562320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.266 [2024-07-25 11:53:40.562350] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.266 [2024-07-25 11:53:40.562366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.525 #30 NEW cov: 12235 ft: 14853 corp: 17/428b lim: 40 exec/s: 30 rss: 72Mb L: 25/40 MS: 1 ChangeBinInt- 00:07:03.525 [2024-07-25 11:53:40.612339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.612372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.525 [2024-07-25 11:53:40.612406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.612422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.525 [2024-07-25 11:53:40.612452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.612468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.525 #31 NEW cov: 12235 ft: 14889 corp: 18/452b lim: 40 exec/s: 31 rss: 72Mb L: 24/40 MS: 1 EraseBytes- 00:07:03.525 [2024-07-25 11:53:40.662466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.662498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.525 [2024-07-25 11:53:40.662532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.662549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.525 [2024-07-25 11:53:40.662578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.662594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.525 #32 NEW cov: 12235 ft: 14900 corp: 19/476b lim: 40 exec/s: 32 rss: 72Mb L: 24/40 MS: 1 CrossOver- 00:07:03.525 [2024-07-25 11:53:40.712510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.712541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.525 [2024-07-25 11:53:40.712576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.712592] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.525 #33 NEW cov: 12235 ft: 15099 corp: 20/494b lim: 40 exec/s: 33 rss: 72Mb L: 18/40 MS: 1 EraseBytes- 00:07:03.525 [2024-07-25 11:53:40.762724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.525 [2024-07-25 11:53:40.762766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.526 [2024-07-25 11:53:40.762802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.526 [2024-07-25 11:53:40.762818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.526 [2024-07-25 11:53:40.762847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00210000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.526 [2024-07-25 11:53:40.762862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.785 #34 NEW cov: 12235 ft: 15122 corp: 21/520b lim: 40 exec/s: 34 rss: 73Mb L: 26/40 MS: 1 InsertByte- 00:07:03.785 [2024-07-25 11:53:40.842796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:003a0000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.785 [2024-07-25 11:53:40.842827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.785 #35 NEW cov: 12235 ft: 15137 corp: 22/528b lim: 40 exec/s: 35 rss: 73Mb L: 8/40 MS: 1 ChangeByte- 00:07:03.785 [2024-07-25 11:53:40.933094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.785 [2024-07-25 11:53:40.933125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.785 [2024-07-25 11:53:40.933159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.785 [2024-07-25 11:53:40.933175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.785 #36 NEW cov: 12235 ft: 15152 corp: 23/547b lim: 40 exec/s: 36 rss: 73Mb L: 19/40 MS: 1 EraseBytes- 00:07:03.785 [2024-07-25 11:53:41.013369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.785 [2024-07-25 11:53:41.013402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.785 [2024-07-25 11:53:41.013438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.785 [2024-07-25 11:53:41.013454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.785 [2024-07-25 11:53:41.013484] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.785 [2024-07-25 11:53:41.013499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.785 #37 NEW cov: 12235 ft: 15162 corp: 24/575b lim: 40 exec/s: 37 rss: 73Mb L: 28/40 MS: 1 ChangeByte- 00:07:04.045 [2024-07-25 11:53:41.093550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.045 [2024-07-25 11:53:41.093582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.045 [2024-07-25 11:53:41.093616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.045 [2024-07-25 11:53:41.093632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.045 [2024-07-25 11:53:41.093667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.045 [2024-07-25 11:53:41.093682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.045 #38 NEW cov: 12235 ft: 15280 corp: 25/603b lim: 40 exec/s: 38 rss: 73Mb L: 28/40 MS: 1 CrossOver- 00:07:04.045 [2024-07-25 11:53:41.173802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.045 [2024-07-25 11:53:41.173832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.045 [2024-07-25 11:53:41.173865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.045 [2024-07-25 11:53:41.173881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.045 [2024-07-25 11:53:41.173910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.045 [2024-07-25 11:53:41.173924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.045 [2024-07-25 11:53:41.173952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.045 [2024-07-25 11:53:41.173967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.045 #39 NEW cov: 12235 ft: 15297 corp: 26/636b lim: 40 exec/s: 19 rss: 73Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:07:04.045 #39 DONE cov: 12235 ft: 15297 corp: 26/636b lim: 40 exec/s: 19 rss: 73Mb 00:07:04.045 Done 39 runs in 2 second(s) 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.305 11:53:41 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.305 11:53:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:04.305 [2024-07-25 11:53:41.414230] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:04.305 [2024-07-25 11:53:41.414304] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906725 ] 00:07:04.305 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.565 [2024-07-25 11:53:41.627722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.565 [2024-07-25 11:53:41.698919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.565 [2024-07-25 11:53:41.758316] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.565 [2024-07-25 11:53:41.774626] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:04.565 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:04.565 INFO: Seed: 3340268039 00:07:04.565 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:04.565 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:04.565 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:04.565 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.565 #2 INITED exec/s: 0 rss: 64Mb 00:07:04.565 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.565 This may also happen if the target rejected all inputs we tried so far 00:07:04.565 [2024-07-25 11:53:41.819402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0efcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.565 [2024-07-25 11:53:41.819437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.565 [2024-07-25 11:53:41.819472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.565 [2024-07-25 11:53:41.819488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.565 [2024-07-25 11:53:41.819517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.565 [2024-07-25 11:53:41.819532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.084 NEW_FUNC[1/699]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:05.084 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:05.084 #31 NEW cov: 11994 ft: 11993 corp: 2/30b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 4 CopyPart-CopyPart-ChangeBit-InsertRepeatedBytes- 00:07:05.084 [2024-07-25 11:53:42.190997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.191035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.084 [2024-07-25 11:53:42.191092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.191106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.084 [2024-07-25 11:53:42.191161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.191178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.084 NEW_FUNC[1/1]: 0x17b8190 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1528 00:07:05.084 #32 NEW cov: 
12109 ft: 12557 corp: 3/59b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 ChangeBinInt- 00:07:05.084 [2024-07-25 11:53:42.250971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.250999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.084 [2024-07-25 11:53:42.251070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.251084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.084 #33 NEW cov: 12115 ft: 13135 corp: 4/81b lim: 40 exec/s: 0 rss: 72Mb L: 22/29 MS: 1 EraseBytes- 00:07:05.084 [2024-07-25 11:53:42.301100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.301126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.084 [2024-07-25 11:53:42.301183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.301197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.084 #34 NEW cov: 12200 ft: 13396 corp: 5/103b lim: 40 exec/s: 0 rss: 72Mb L: 22/29 MS: 1 ShuffleBytes- 00:07:05.084 [2024-07-25 11:53:42.351359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:46464646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.351384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.084 [2024-07-25 11:53:42.351438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:46464646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.351453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.084 [2024-07-25 11:53:42.351508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:46464646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.084 [2024-07-25 11:53:42.351521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.084 #35 NEW cov: 12200 ft: 13674 corp: 6/128b lim: 40 exec/s: 0 rss: 72Mb L: 25/29 MS: 1 InsertRepeatedBytes- 00:07:05.344 [2024-07-25 11:53:42.391328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.391371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.344 [2024-07-25 11:53:42.391442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.391456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.344 #36 NEW cov: 12200 ft: 13738 corp: 7/150b lim: 40 exec/s: 0 rss: 72Mb L: 22/29 MS: 1 CrossOver- 00:07:05.344 [2024-07-25 11:53:42.441480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.441510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.344 [2024-07-25 11:53:42.441567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.441581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.344 #37 NEW cov: 12200 ft: 13779 corp: 8/172b lim: 40 exec/s: 0 rss: 72Mb L: 22/29 MS: 1 ShuffleBytes- 00:07:05.344 [2024-07-25 11:53:42.481604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.481629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.344 [2024-07-25 11:53:42.481684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:001dfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.481698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.344 #38 NEW cov: 12200 ft: 13871 corp: 9/194b lim: 40 exec/s: 0 rss: 72Mb L: 22/29 MS: 1 CopyPart- 00:07:05.344 [2024-07-25 11:53:42.531714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.531744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.344 [2024-07-25 11:53:42.531804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.531818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.344 #39 NEW cov: 12200 ft: 13913 corp: 10/216b lim: 40 exec/s: 0 rss: 72Mb L: 22/29 MS: 1 ShuffleBytes- 00:07:05.344 [2024-07-25 11:53:42.571823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.571847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.344 [2024-07-25 11:53:42.571904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:05.344 [2024-07-25 11:53:42.571918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.344 #40 NEW cov: 12200 ft: 13942 corp: 11/237b lim: 40 exec/s: 0 rss: 72Mb L: 21/29 MS: 1 EraseBytes- 00:07:05.344 [2024-07-25 11:53:42.621949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.621975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.344 [2024-07-25 11:53:42.622048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:001dfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.344 [2024-07-25 11:53:42.622063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.604 #41 NEW cov: 12200 ft: 13953 corp: 12/259b lim: 40 exec/s: 0 rss: 73Mb L: 22/29 MS: 1 ShuffleBytes- 00:07:05.604 [2024-07-25 11:53:42.672112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fc4cfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.672141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.604 [2024-07-25 11:53:42.672196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fc001dfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.672210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.604 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:05.604 #42 NEW cov: 12217 ft: 14013 corp: 13/282b lim: 40 exec/s: 0 rss: 73Mb L: 23/29 MS: 1 InsertByte- 00:07:05.604 [2024-07-25 11:53:42.722262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfc0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.722288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.604 [2024-07-25 11:53:42.722345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:001dfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.722359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.604 #43 NEW cov: 12217 ft: 14081 corp: 14/300b lim: 40 exec/s: 0 rss: 73Mb L: 18/29 MS: 1 CrossOver- 00:07:05.604 [2024-07-25 11:53:42.762365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fc4cfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.762390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.604 [2024-07-25 11:53:42.762447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fc001dfc cdw11:fcfcfcfc 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.762461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.604 #44 NEW cov: 12217 ft: 14106 corp: 15/322b lim: 40 exec/s: 0 rss: 73Mb L: 22/29 MS: 1 EraseBytes- 00:07:05.604 [2024-07-25 11:53:42.812480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.812506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.604 [2024-07-25 11:53:42.812562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.812577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.604 #45 NEW cov: 12217 ft: 14127 corp: 16/344b lim: 40 exec/s: 45 rss: 73Mb L: 22/29 MS: 1 ShuffleBytes- 00:07:05.604 [2024-07-25 11:53:42.852618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfc7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.852645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.604 [2024-07-25 11:53:42.852700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.852714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.604 #46 NEW cov: 12217 ft: 14150 corp: 17/366b lim: 40 exec/s: 46 rss: 73Mb L: 22/29 MS: 1 ChangeByte- 00:07:05.604 [2024-07-25 11:53:42.902879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfc0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.902909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.604 [2024-07-25 11:53:42.902968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.902983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.604 [2024-07-25 11:53:42.903038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00001dfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.604 [2024-07-25 11:53:42.903051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.863 #47 NEW cov: 12217 ft: 14196 corp: 18/393b lim: 40 exec/s: 47 rss: 73Mb L: 27/29 MS: 1 InsertRepeatedBytes- 00:07:05.863 [2024-07-25 11:53:42.953044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a2e0000 cdw11:001dfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:42.953071] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.863 [2024-07-25 11:53:42.953129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fc3afcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:42.953143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.863 [2024-07-25 11:53:42.953200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:42.953213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.863 #51 NEW cov: 12217 ft: 14218 corp: 19/418b lim: 40 exec/s: 51 rss: 73Mb L: 25/29 MS: 4 ChangeBit-InsertByte-InsertByte-CrossOver- 00:07:05.863 [2024-07-25 11:53:42.993145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfc0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:42.993171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.863 [2024-07-25 11:53:42.993228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:7f91800f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:42.993242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.863 [2024-07-25 11:53:42.993298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b9d51dfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:42.993312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.863 #52 NEW cov: 12217 ft: 14223 corp: 20/445b lim: 40 exec/s: 52 rss: 73Mb L: 27/29 MS: 1 CMP- DE: "\001\000\177\221\200\017\271\325"- 00:07:05.863 [2024-07-25 11:53:43.043054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:43.043081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.863 #53 NEW cov: 12217 ft: 14565 corp: 21/454b lim: 40 exec/s: 53 rss: 73Mb L: 9/29 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:05.863 [2024-07-25 11:53:43.083343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfc0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.863 [2024-07-25 11:53:43.083372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.864 [2024-07-25 11:53:43.083429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:3b91800f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.864 [2024-07-25 11:53:43.083443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:05.864 [2024-07-25 11:53:43.083498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b9d51dfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.864 [2024-07-25 11:53:43.083511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.864 #54 NEW cov: 12217 ft: 14592 corp: 22/481b lim: 40 exec/s: 54 rss: 73Mb L: 27/29 MS: 1 ChangeByte- 00:07:05.864 [2024-07-25 11:53:43.133406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:f84cfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.864 [2024-07-25 11:53:43.133433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.864 [2024-07-25 11:53:43.133492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fc001dfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.864 [2024-07-25 11:53:43.133505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.123 #55 NEW cov: 12217 ft: 14597 corp: 23/503b lim: 40 exec/s: 55 rss: 73Mb L: 22/29 MS: 1 ChangeBit- 00:07:06.123 [2024-07-25 11:53:43.183422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:400affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.183448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.123 #58 NEW cov: 12217 ft: 14625 corp: 24/513b lim: 40 exec/s: 58 rss: 73Mb L: 10/29 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:06.123 [2024-07-25 11:53:43.223605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:d64cfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.223631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.123 [2024-07-25 11:53:43.223690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fc001dfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.223704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.123 #59 NEW cov: 12217 ft: 14657 corp: 25/536b lim: 40 exec/s: 59 rss: 73Mb L: 23/29 MS: 1 ChangeByte- 00:07:06.123 [2024-07-25 11:53:43.263785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.263811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.123 [2024-07-25 11:53:43.263868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:001dfcfd cdw11:00fcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.263882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.123 #60 NEW cov: 12217 
ft: 14703 corp: 26/558b lim: 40 exec/s: 60 rss: 73Mb L: 22/29 MS: 1 ChangeBinInt- 00:07:06.123 [2024-07-25 11:53:43.303851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000028 cdw11:1dfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.303879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.123 [2024-07-25 11:53:43.303939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.303952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.123 #61 NEW cov: 12217 ft: 14719 corp: 27/581b lim: 40 exec/s: 61 rss: 73Mb L: 23/29 MS: 1 InsertByte- 00:07:06.123 [2024-07-25 11:53:43.344052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000fcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.344077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.123 [2024-07-25 11:53:43.344135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.344149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.123 #62 NEW cov: 12217 ft: 14740 corp: 28/597b lim: 40 exec/s: 62 rss: 73Mb L: 16/29 MS: 1 EraseBytes- 00:07:06.123 [2024-07-25 11:53:43.384110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:d64cfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.384135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.123 [2024-07-25 11:53:43.384193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fc001dfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.123 [2024-07-25 11:53:43.384206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.123 #63 NEW cov: 12217 ft: 14773 corp: 29/620b lim: 40 exec/s: 63 rss: 73Mb L: 23/29 MS: 1 CopyPart- 00:07:06.381 [2024-07-25 11:53:43.434257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.434284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.434341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00fcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.434355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.381 #64 NEW cov: 12217 ft: 14797 corp: 30/642b lim: 40 exec/s: 64 rss: 73Mb L: 22/29 MS: 1 PersAutoDict- DE: 
"\000\000\000\000\000\000\000\000"- 00:07:06.381 [2024-07-25 11:53:43.474483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfc0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.474509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.474567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000198 cdw11:007f9180 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.474581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.474638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0fb9d51d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.474651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.381 #65 NEW cov: 12217 ft: 14821 corp: 31/670b lim: 40 exec/s: 65 rss: 73Mb L: 28/29 MS: 1 InsertByte- 00:07:06.381 [2024-07-25 11:53:43.514643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:46464646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.514668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.514727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:46464646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.514745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.514819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:46464646 cdw11:46254646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.514833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.381 #66 NEW cov: 12217 ft: 14824 corp: 32/695b lim: 40 exec/s: 66 rss: 73Mb L: 25/29 MS: 1 ChangeByte- 00:07:06.381 [2024-07-25 11:53:43.565083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fc00001d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.565109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.565166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:001dfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.565180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.565237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.565250] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.565309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:fcfc001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.565323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.565380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.565394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.381 #67 NEW cov: 12217 ft: 15299 corp: 33/735b lim: 40 exec/s: 67 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:07:06.381 [2024-07-25 11:53:43.604890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:463a4646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.604916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.604973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:46464646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.604986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.605041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:46464646 cdw11:46464646 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.605055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.381 #68 NEW cov: 12217 ft: 15316 corp: 34/761b lim: 40 exec/s: 68 rss: 73Mb L: 26/40 MS: 1 InsertByte- 00:07:06.381 [2024-07-25 11:53:43.644988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a2e0000 cdw11:001dfcff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.645014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.645069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:010000fc cdw11:fc3afcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.645083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.381 [2024-07-25 11:53:43.645139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.381 [2024-07-25 11:53:43.645152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.381 #69 NEW cov: 12217 ft: 15321 corp: 35/786b lim: 40 exec/s: 69 rss: 74Mb L: 25/40 MS: 1 CMP- DE: "\377\001\000\000"- 00:07:06.640 [2024-07-25 11:53:43.695002] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fc0cfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.695026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.640 [2024-07-25 11:53:43.695082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.695095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.640 #70 NEW cov: 12224 ft: 15378 corp: 36/808b lim: 40 exec/s: 70 rss: 74Mb L: 22/40 MS: 1 ChangeBinInt- 00:07:06.640 [2024-07-25 11:53:43.745553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fc00001d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.745580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.640 [2024-07-25 11:53:43.745636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fcfcfcfc cdw11:001dfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.745651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.640 [2024-07-25 11:53:43.745707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.745720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.640 [2024-07-25 11:53:43.745779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:fcfc001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.745794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.640 [2024-07-25 11:53:43.745849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:fcfcfcfc cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.745862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.640 #71 NEW cov: 12224 ft: 15381 corp: 37/848b lim: 40 exec/s: 71 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:06.640 [2024-07-25 11:53:43.795195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001d cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.640 [2024-07-25 11:53:43.795223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.640 #72 NEW cov: 12224 ft: 15385 corp: 38/862b lim: 40 exec/s: 36 rss: 74Mb L: 14/40 MS: 1 EraseBytes- 00:07:06.640 #72 DONE cov: 12224 ft: 15385 corp: 38/862b lim: 40 exec/s: 36 rss: 74Mb 00:07:06.640 ###### Recommended dictionary. 
###### 00:07:06.640 "\001\000\177\221\200\017\271\325" # Uses: 0 00:07:06.640 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:06.640 "\377\001\000\000" # Uses: 0 00:07:06.640 ###### End of recommended dictionary. ###### 00:07:06.640 Done 72 runs in 2 second(s) 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.899 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:06.900 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:06.900 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:06.900 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:06.900 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.900 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.900 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.900 11:53:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:06.900 [2024-07-25 11:53:43.995428] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:07:06.900 [2024-07-25 11:53:43.995503] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907093 ] 00:07:06.900 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.159 [2024-07-25 11:53:44.211993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.159 [2024-07-25 11:53:44.283728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.159 [2024-07-25 11:53:44.343207] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.159 [2024-07-25 11:53:44.359520] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:07.159 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.159 INFO: Seed: 1631307176 00:07:07.159 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:07.159 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:07.159 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:07.159 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.159 #2 INITED exec/s: 0 rss: 65Mb 00:07:07.159 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:07.159 This may also happen if the target rejected all inputs we tried so far 00:07:07.159 [2024-07-25 11:53:44.436648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.159 [2024-07-25 11:53:44.436690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.677 NEW_FUNC[1/701]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:07.677 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:07.677 #7 NEW cov: 11972 ft: 11971 corp: 2/11b lim: 35 exec/s: 0 rss: 72Mb L: 10/10 MS: 5 ShuffleBytes-InsertByte-ChangeBit-ChangeByte-CMP- DE: "\000\032\015\324\321\034\370\336"- 00:07:07.677 [2024-07-25 11:53:44.797740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.677 [2024-07-25 11:53:44.797781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.677 [2024-07-25 11:53:44.797874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.677 [2024-07-25 11:53:44.797893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.677 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:07.677 #8 NEW cov: 12119 ft: 13326 corp: 3/25b lim: 35 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:07:07.677 [2024-07-25 11:53:44.857885] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:07.678 [2024-07-25 11:53:44.857922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.678 [2024-07-25 11:53:44.858017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.678 [2024-07-25 11:53:44.858039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.678 #9 NEW cov: 12125 ft: 13615 corp: 4/39b lim: 35 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 ChangeBinInt- 00:07:07.678 [2024-07-25 11:53:44.928337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.678 [2024-07-25 11:53:44.928368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.678 [2024-07-25 11:53:44.928457] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.678 [2024-07-25 11:53:44.928476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.678 #10 NEW cov: 12210 ft: 13825 corp: 5/53b lim: 35 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 PersAutoDict- DE: "\000\032\015\324\321\034\370\336"- 00:07:07.678 [2024-07-25 11:53:44.978278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.678 [2024-07-25 11:53:44.978308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.937 #11 NEW cov: 12210 ft: 13895 corp: 6/63b lim: 35 exec/s: 0 rss: 72Mb L: 10/14 MS: 1 ChangeByte- 00:07:07.937 [2024-07-25 11:53:45.038947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.038975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.937 [2024-07-25 11:53:45.039078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.039094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.937 #15 NEW cov: 12210 ft: 14011 corp: 7/82b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 4 EraseBytes-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:07:07.937 [2024-07-25 11:53:45.089201] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.089229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.937 [2024-07-25 11:53:45.089323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.089340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:07.937 #16 NEW cov: 12210 ft: 14064 corp: 8/96b lim: 35 exec/s: 0 rss: 72Mb L: 14/19 MS: 1 ChangeByte- 00:07:07.937 [2024-07-25 11:53:45.159891] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.159917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.937 [2024-07-25 11:53:45.160015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.160033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.937 [2024-07-25 11:53:45.160121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST MEM BUFFER cid:6 cdw10:8000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.160138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.937 #17 NEW cov: 12210 ft: 14419 corp: 9/118b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 PersAutoDict- DE: "\000\032\015\324\321\034\370\336"- 00:07:07.937 [2024-07-25 11:53:45.210299] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.937 [2024-07-25 11:53:45.210325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.937 [2024-07-25 11:53:45.210412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.938 [2024-07-25 11:53:45.210427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.938 NEW_FUNC[1/1]: 0x11f7bd0 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:07:07.938 #18 NEW cov: 12233 ft: 14542 corp: 10/142b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:08.197 [2024-07-25 11:53:45.270764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.270791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.197 [2024-07-25 11:53:45.270886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.270902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.197 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:08.197 #19 NEW cov: 12256 ft: 14608 corp: 11/167b lim: 35 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 InsertByte- 00:07:08.197 [2024-07-25 11:53:45.341278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.341305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:08.197 [2024-07-25 11:53:45.341403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.341417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.197 #20 NEW cov: 12256 ft: 14649 corp: 12/192b lim: 35 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 InsertByte- 00:07:08.197 [2024-07-25 11:53:45.391637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.391663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.197 [2024-07-25 11:53:45.391768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.391785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.197 #21 NEW cov: 12256 ft: 14750 corp: 13/217b lim: 35 exec/s: 21 rss: 72Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:08.197 [2024-07-25 11:53:45.461336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.461361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.197 [2024-07-25 11:53:45.461468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.197 [2024-07-25 11:53:45.461482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.197 #22 NEW cov: 12256 ft: 14786 corp: 14/235b lim: 35 exec/s: 22 rss: 72Mb L: 18/25 MS: 1 PersAutoDict- DE: "\000\032\015\324\321\034\370\336"- 00:07:08.456 [2024-07-25 11:53:45.512261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.456 [2024-07-25 11:53:45.512289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.456 [2024-07-25 11:53:45.512380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.456 [2024-07-25 11:53:45.512396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.456 #23 NEW cov: 12256 ft: 14803 corp: 15/259b lim: 35 exec/s: 23 rss: 72Mb L: 24/25 MS: 1 ChangeByte- 00:07:08.456 [2024-07-25 11:53:45.562899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.456 [2024-07-25 11:53:45.562925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.456 [2024-07-25 11:53:45.563024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.456 [2024-07-25 11:53:45.563040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.456 [2024-07-25 11:53:45.563130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.456 [2024-07-25 11:53:45.563148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.456 #24 NEW cov: 12256 ft: 15001 corp: 16/290b lim: 35 exec/s: 24 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:08.456 [2024-07-25 11:53:45.612324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.612351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.457 [2024-07-25 11:53:45.612453] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000de SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.612471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.457 #25 NEW cov: 12256 ft: 15024 corp: 17/304b lim: 35 exec/s: 25 rss: 73Mb L: 14/31 MS: 1 PersAutoDict- DE: "\000\032\015\324\321\034\370\336"- 00:07:08.457 [2024-07-25 11:53:45.683462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.683489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.457 [2024-07-25 11:53:45.683584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.683600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.457 [2024-07-25 11:53:45.683700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.683717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.457 #26 NEW cov: 12256 ft: 15113 corp: 18/332b lim: 35 exec/s: 26 rss: 73Mb L: 28/31 MS: 1 InsertRepeatedBytes- 00:07:08.457 [2024-07-25 11:53:45.753267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.753296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.457 [2024-07-25 11:53:45.753388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.753407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.457 [2024-07-25 11:53:45.753501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.457 [2024-07-25 11:53:45.753518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:08.716 #27 NEW cov: 12256 ft: 15128 corp: 19/359b lim: 35 exec/s: 27 rss: 73Mb L: 27/31 MS: 1 PersAutoDict- DE: "\000\032\015\324\321\034\370\336"- 00:07:08.716 [2024-07-25 11:53:45.824077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.824108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.716 [2024-07-25 11:53:45.824210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.824227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.716 [2024-07-25 11:53:45.824329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000de SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.824346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.716 [2024-07-25 11:53:45.824446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.824463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.716 #28 NEW cov: 12256 ft: 15277 corp: 20/389b lim: 35 exec/s: 28 rss: 73Mb L: 30/31 MS: 1 CrossOver- 00:07:08.716 [2024-07-25 11:53:45.894341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.894371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.716 [2024-07-25 11:53:45.894469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.894486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.716 #29 NEW cov: 12256 ft: 15295 corp: 21/413b lim: 35 exec/s: 29 rss: 73Mb L: 24/31 MS: 1 ChangeBinInt- 00:07:08.716 [2024-07-25 11:53:45.943684] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.943714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.716 #30 NEW cov: 12256 ft: 15334 corp: 22/423b lim: 35 exec/s: 30 rss: 73Mb L: 10/31 MS: 1 CrossOver- 00:07:08.716 [2024-07-25 11:53:45.994477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.994508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.716 [2024-07-25 11:53:45.994603] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.994620] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.716 [2024-07-25 11:53:45.994707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST MEM BUFFER cid:6 cdw10:8000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.716 [2024-07-25 11:53:45.994726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.975 #31 NEW cov: 12256 ft: 15344 corp: 23/447b lim: 35 exec/s: 31 rss: 73Mb L: 24/31 MS: 1 CMP- DE: "\377\377"- 00:07:08.975 [2024-07-25 11:53:46.065574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.065601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.975 [2024-07-25 11:53:46.065690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:6 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.065707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMMAND SEQUENCE ERROR (00/0c) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.975 [2024-07-25 11:53:46.065805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.065820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.975 NEW_FUNC[1/1]: 0x4b6ee0 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:07:08.975 #32 NEW cov: 12290 ft: 15418 corp: 24/478b lim: 35 exec/s: 32 rss: 73Mb L: 31/31 MS: 1 CopyPart- 00:07:08.975 [2024-07-25 11:53:46.134810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.134842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.975 #33 NEW cov: 12290 ft: 15433 corp: 25/488b lim: 35 exec/s: 33 rss: 73Mb L: 10/31 MS: 1 ChangeByte- 00:07:08.975 [2024-07-25 11:53:46.196253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000041 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.196278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.975 [2024-07-25 11:53:46.196362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.196378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.975 [2024-07-25 11:53:46.196481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.196498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.975 [2024-07-25 11:53:46.196594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.196609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.975 #34 NEW cov: 12290 ft: 15448 corp: 26/521b lim: 35 exec/s: 34 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:07:08.975 [2024-07-25 11:53:46.266160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.266185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.975 [2024-07-25 11:53:46.266280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.975 [2024-07-25 11:53:46.266296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.235 #40 NEW cov: 12290 ft: 15458 corp: 27/543b lim: 35 exec/s: 40 rss: 73Mb L: 22/33 MS: 1 EraseBytes- 00:07:09.235 [2024-07-25 11:53:46.316566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.235 [2024-07-25 11:53:46.316596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.235 [2024-07-25 11:53:46.316688] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.235 [2024-07-25 11:53:46.316705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.235 #41 NEW cov: 12290 ft: 15515 corp: 28/567b lim: 35 exec/s: 41 rss: 73Mb L: 24/33 MS: 1 PersAutoDict- DE: "\377\377"- 00:07:09.235 [2024-07-25 11:53:46.366854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.235 [2024-07-25 11:53:46.366880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.235 [2024-07-25 11:53:46.366972] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.235 [2024-07-25 11:53:46.366988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.235 #42 NEW cov: 12290 ft: 15541 corp: 29/589b lim: 35 exec/s: 42 rss: 73Mb L: 22/33 MS: 1 EraseBytes- 00:07:09.235 NEW_FUNC[1/2]: 0x4b3c50 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:07:09.235 NEW_FUNC[2/2]: 0x11f1070 in nvmf_ctrlr_set_features_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1617 00:07:09.235 #45 NEW cov: 12340 ft: 15620 corp: 30/599b lim: 35 exec/s: 22 rss: 73Mb L: 10/33 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:09.235 #45 DONE cov: 12340 ft: 15620 corp: 30/599b lim: 35 exec/s: 22 rss: 73Mb 00:07:09.235 ###### Recommended dictionary. ###### 00:07:09.235 "\000\032\015\324\321\034\370\336" # Uses: 5 00:07:09.235 "\377\377" # Uses: 3 00:07:09.235 ###### End of recommended dictionary. 
###### 00:07:09.235 Done 45 runs in 2 second(s) 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.495 11:53:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:09.495 [2024-07-25 11:53:46.608862] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:07:09.495 [2024-07-25 11:53:46.608938] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907464 ] 00:07:09.495 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.754 [2024-07-25 11:53:46.817683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.754 [2024-07-25 11:53:46.887914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.754 [2024-07-25 11:53:46.947336] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.754 [2024-07-25 11:53:46.963627] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:09.754 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.754 INFO: Seed: 4233297409 00:07:09.754 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:09.754 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:09.754 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:09.754 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.754 #2 INITED exec/s: 0 rss: 64Mb 00:07:09.755 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:09.755 This may also happen if the target rejected all inputs we tried so far 00:07:09.755 [2024-07-25 11:53:47.022460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.755 [2024-07-25 11:53:47.022493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.755 [2024-07-25 11:53:47.022551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000498 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.755 [2024-07-25 11:53:47.022565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.273 NEW_FUNC[1/700]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:10.273 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:10.273 #6 NEW cov: 11966 ft: 11960 corp: 2/19b lim: 35 exec/s: 0 rss: 72Mb L: 18/18 MS: 4 InsertByte-CrossOver-InsertByte-InsertRepeatedBytes- 00:07:10.273 [2024-07-25 11:53:47.363728] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.273 [2024-07-25 11:53:47.363797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.273 [2024-07-25 11:53:47.363887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000498 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.273 [2024-07-25 11:53:47.363915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.273 [2024-07-25 11:53:47.364003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ed SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:10.273 [2024-07-25 11:53:47.364029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.273 [2024-07-25 11:53:47.364114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ed SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.273 [2024-07-25 11:53:47.364140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.273 #7 NEW cov: 12091 ft: 13193 corp: 3/53b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:10.273 [2024-07-25 11:53:47.423456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000bd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.423483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.274 [2024-07-25 11:53:47.423542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.423556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.274 [2024-07-25 11:53:47.423614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.423628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.274 #10 NEW cov: 12097 ft: 13655 corp: 4/79b lim: 35 exec/s: 0 rss: 72Mb L: 26/34 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:10.274 [2024-07-25 11:53:47.463607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.463634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.274 [2024-07-25 11:53:47.463698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.463713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.274 NEW_FUNC[1/1]: 0x4b3c50 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:07:10.274 #13 NEW cov: 12205 ft: 13924 corp: 5/105b lim: 35 exec/s: 0 rss: 72Mb L: 26/34 MS: 3 ChangeBit-ShuffleBytes-CrossOver- 00:07:10.274 [2024-07-25 11:53:47.503710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.503742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.274 [2024-07-25 11:53:47.503817] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.503832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.274 #14 NEW cov: 12205 ft: 
14031 corp: 6/131b lim: 35 exec/s: 0 rss: 72Mb L: 26/34 MS: 1 ChangeBinInt- 00:07:10.274 [2024-07-25 11:53:47.553987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.274 [2024-07-25 11:53:47.554013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.533 NEW_FUNC[1/1]: 0x4b6a10 in feat_volatile_write_cache /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:312 00:07:10.533 #17 NEW cov: 12219 ft: 14509 corp: 7/160b lim: 35 exec/s: 0 rss: 72Mb L: 29/34 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:10.533 [2024-07-25 11:53:47.594224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.533 [2024-07-25 11:53:47.594251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.533 [2024-07-25 11:53:47.594311] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.533 [2024-07-25 11:53:47.594325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.533 #18 NEW cov: 12219 ft: 14628 corp: 8/195b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:10.533 [2024-07-25 11:53:47.643937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.533 [2024-07-25 11:53:47.643962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.533 [2024-07-25 11:53:47.644025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000001c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.533 [2024-07-25 11:53:47.644039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.533 #20 NEW cov: 12219 ft: 14649 corp: 9/210b lim: 35 exec/s: 0 rss: 73Mb L: 15/35 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:10.533 [2024-07-25 11:53:47.684094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.533 [2024-07-25 11:53:47.684119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.533 #26 NEW cov: 12219 ft: 14698 corp: 10/227b lim: 35 exec/s: 0 rss: 73Mb L: 17/35 MS: 1 EraseBytes- 00:07:10.533 [2024-07-25 11:53:47.734323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.533 [2024-07-25 11:53:47.734350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.533 [2024-07-25 11:53:47.734426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.534 [2024-07-25 11:53:47.734441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.534 #27 NEW cov: 12219 
ft: 14774 corp: 11/254b lim: 35 exec/s: 0 rss: 73Mb L: 27/35 MS: 1 InsertByte- 00:07:10.534 [2024-07-25 11:53:47.774674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.534 [2024-07-25 11:53:47.774700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.534 [2024-07-25 11:53:47.774764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.534 [2024-07-25 11:53:47.774779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.534 [2024-07-25 11:53:47.774962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000000d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.534 [2024-07-25 11:53:47.774977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.534 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:10.534 #28 NEW cov: 12233 ft: 14823 corp: 12/289b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:07:10.534 [2024-07-25 11:53:47.824315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.534 [2024-07-25 11:53:47.824341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.793 #32 NEW cov: 12233 ft: 15072 corp: 13/301b lim: 35 exec/s: 0 rss: 73Mb L: 12/35 MS: 4 ChangeBinInt-ChangeByte-ChangeBit-CrossOver- 00:07:10.793 [2024-07-25 11:53:47.864414] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000003d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.793 [2024-07-25 11:53:47.864439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.793 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:10.793 #33 NEW cov: 12256 ft: 15102 corp: 14/314b lim: 35 exec/s: 0 rss: 73Mb L: 13/35 MS: 1 InsertByte- 00:07:10.793 [2024-07-25 11:53:47.924548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.793 [2024-07-25 11:53:47.924573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.793 #34 NEW cov: 12256 ft: 15140 corp: 15/327b lim: 35 exec/s: 0 rss: 73Mb L: 13/35 MS: 1 ChangeBit- 00:07:10.793 [2024-07-25 11:53:47.975138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.793 [2024-07-25 11:53:47.975164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.793 #35 NEW cov: 12256 ft: 15166 corp: 16/356b lim: 35 exec/s: 0 rss: 73Mb L: 29/35 MS: 1 CrossOver- 00:07:10.793 [2024-07-25 11:53:48.014955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001d2 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:10.793 [2024-07-25 11:53:48.014980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.793 [2024-07-25 11:53:48.015041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.793 [2024-07-25 11:53:48.015058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.793 #36 NEW cov: 12256 ft: 15221 corp: 17/370b lim: 35 exec/s: 36 rss: 73Mb L: 14/35 MS: 1 InsertByte- 00:07:10.793 [2024-07-25 11:53:48.064947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.793 [2024-07-25 11:53:48.064973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.793 #37 NEW cov: 12256 ft: 15284 corp: 18/383b lim: 35 exec/s: 37 rss: 73Mb L: 13/35 MS: 1 ChangeByte- 00:07:11.052 [2024-07-25 11:53:48.105412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.105438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.052 [2024-07-25 11:53:48.105515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.105531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.052 #38 NEW cov: 12256 ft: 15307 corp: 19/410b lim: 35 exec/s: 38 rss: 73Mb L: 27/35 MS: 1 InsertByte- 00:07:11.052 [2024-07-25 11:53:48.145296] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.145322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.052 [2024-07-25 11:53:48.145380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.145394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.052 #39 NEW cov: 12256 ft: 15317 corp: 20/424b lim: 35 exec/s: 39 rss: 73Mb L: 14/35 MS: 1 InsertByte- 00:07:11.052 [2024-07-25 11:53:48.185522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000bd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.185547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.052 [2024-07-25 11:53:48.185605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.185619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.052 [2024-07-25 11:53:48.185694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.185709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.052 #40 NEW cov: 12256 ft: 15341 corp: 21/451b lim: 35 exec/s: 40 rss: 73Mb L: 27/35 MS: 1 InsertByte- 00:07:11.052 [2024-07-25 11:53:48.235970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.052 [2024-07-25 11:53:48.235994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.236056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.236070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.236193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000001f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.236210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.236273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:0000002f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.236286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.053 #41 NEW cov: 12256 ft: 15390 corp: 22/486b lim: 35 exec/s: 41 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:11.053 [2024-07-25 11:53:48.285996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.286023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.286086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000498 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.286100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.286162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ed SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.286176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.286237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ed SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.286250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.053 #42 NEW cov: 12256 ft: 15407 corp: 23/520b lim: 35 exec/s: 42 rss: 73Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:11.053 [2024-07-25 11:53:48.336224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.336251] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.336312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.336326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.336447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000001f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.336462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.053 [2024-07-25 11:53:48.336520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:0000002f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.053 [2024-07-25 11:53:48.336533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.312 #43 NEW cov: 12256 ft: 15435 corp: 24/555b lim: 35 exec/s: 43 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:11.312 [2024-07-25 11:53:48.386148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.312 [2024-07-25 11:53:48.386175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.312 [2024-07-25 11:53:48.386237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.312 [2024-07-25 11:53:48.386252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.312 #44 NEW cov: 12256 ft: 15446 corp: 25/577b lim: 35 exec/s: 44 rss: 73Mb L: 22/35 MS: 1 EraseBytes- 00:07:11.312 [2024-07-25 11:53:48.426237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.312 [2024-07-25 11:53:48.426263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.312 #45 NEW cov: 12256 ft: 15462 corp: 26/601b lim: 35 exec/s: 45 rss: 73Mb L: 24/35 MS: 1 EraseBytes- 00:07:11.312 [2024-07-25 11:53:48.476400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.312 [2024-07-25 11:53:48.476427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.312 #46 NEW cov: 12256 ft: 15473 corp: 27/625b lim: 35 exec/s: 46 rss: 74Mb L: 24/35 MS: 1 ShuffleBytes- 00:07:11.312 #47 NEW cov: 12256 ft: 15520 corp: 28/637b lim: 35 exec/s: 47 rss: 74Mb L: 12/35 MS: 1 EraseBytes- 00:07:11.312 [2024-07-25 11:53:48.576802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.312 [2024-07-25 11:53:48.576828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.312 [2024-07-25 11:53:48.576906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.312 [2024-07-25 11:53:48.576921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.312 [2024-07-25 11:53:48.577039] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.312 [2024-07-25 11:53:48.577054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.312 #48 NEW cov: 12256 ft: 15538 corp: 29/671b lim: 35 exec/s: 48 rss: 74Mb L: 34/35 MS: 1 EraseBytes- 00:07:11.571 [2024-07-25 11:53:48.616908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.616935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.571 #49 NEW cov: 12256 ft: 15575 corp: 30/700b lim: 35 exec/s: 49 rss: 74Mb L: 29/35 MS: 1 ChangeByte- 00:07:11.571 [2024-07-25 11:53:48.656611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.656638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.571 #50 NEW cov: 12256 ft: 15629 corp: 31/712b lim: 35 exec/s: 50 rss: 74Mb L: 12/35 MS: 1 ChangeByte- 00:07:11.571 [2024-07-25 11:53:48.707172] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.707199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.571 #51 NEW cov: 12256 ft: 15640 corp: 32/745b lim: 35 exec/s: 51 rss: 74Mb L: 33/35 MS: 1 CopyPart- 00:07:11.571 [2024-07-25 11:53:48.757037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.757063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.571 [2024-07-25 11:53:48.757126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006e4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.757140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.571 #52 NEW cov: 12256 ft: 15651 corp: 33/760b lim: 35 exec/s: 52 rss: 74Mb L: 15/35 MS: 1 InsertByte- 00:07:11.571 [2024-07-25 11:53:48.807567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.807596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.571 [2024-07-25 11:53:48.807659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000498 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.807673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:11.571 [2024-07-25 11:53:48.807730] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ed SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.807748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.571 [2024-07-25 11:53:48.807807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ed SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.807821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.571 [2024-07-25 11:53:48.807881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000498 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.807895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.571 #53 NEW cov: 12256 ft: 15683 corp: 34/795b lim: 35 exec/s: 53 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:11.571 [2024-07-25 11:53:48.857582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.571 [2024-07-25 11:53:48.857607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.830 #54 NEW cov: 12256 ft: 15691 corp: 35/829b lim: 35 exec/s: 54 rss: 74Mb L: 34/35 MS: 1 CopyPart- 00:07:11.830 [2024-07-25 11:53:48.907656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.830 [2024-07-25 11:53:48.907682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.830 [2024-07-25 11:53:48.907745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.830 [2024-07-25 11:53:48.907759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.830 #55 NEW cov: 12256 ft: 15698 corp: 36/855b lim: 35 exec/s: 55 rss: 74Mb L: 26/35 MS: 1 ChangeByte- 00:07:11.830 [2024-07-25 11:53:48.947834] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000bd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.830 [2024-07-25 11:53:48.947859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.830 [2024-07-25 11:53:48.947920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.830 [2024-07-25 11:53:48.947935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.831 [2024-07-25 11:53:48.947995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.831 [2024-07-25 11:53:48.948009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.831 [2024-07-25 11:53:48.948070] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.831 [2024-07-25 11:53:48.948083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.831 #56 NEW cov: 12256 ft: 15700 corp: 37/887b lim: 35 exec/s: 56 rss: 74Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:11.831 [2024-07-25 11:53:48.987695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.831 [2024-07-25 11:53:48.987721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.831 #57 NEW cov: 12256 ft: 15720 corp: 38/905b lim: 35 exec/s: 28 rss: 74Mb L: 18/35 MS: 1 InsertByte- 00:07:11.831 #57 DONE cov: 12256 ft: 15720 corp: 38/905b lim: 35 exec/s: 28 rss: 74Mb 00:07:11.831 Done 57 runs in 2 second(s) 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:12.090 11:53:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 
-Z 16 00:07:12.090 [2024-07-25 11:53:49.189677] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:12.090 [2024-07-25 11:53:49.189761] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907831 ] 00:07:12.090 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.349 [2024-07-25 11:53:49.401591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.349 [2024-07-25 11:53:49.471967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.349 [2024-07-25 11:53:49.531565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.349 [2024-07-25 11:53:49.547866] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:12.349 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.349 INFO: Seed: 2524322401 00:07:12.349 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:12.349 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:12.349 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:12.349 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.349 #2 INITED exec/s: 0 rss: 64Mb 00:07:12.349 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.349 This may also happen if the target rejected all inputs we tried so far 00:07:12.349 [2024-07-25 11:53:49.592500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.349 [2024-07-25 11:53:49.592534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.349 [2024-07-25 11:53:49.592586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.349 [2024-07-25 11:53:49.592604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.867 NEW_FUNC[1/701]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:12.867 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.867 #6 NEW cov: 12082 ft: 12071 corp: 2/55b lim: 105 exec/s: 0 rss: 72Mb L: 54/54 MS: 4 InsertByte-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:07:12.867 [2024-07-25 11:53:49.963449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060674 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.867 [2024-07-25 11:53:49.963500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.867 [2024-07-25 11:53:49.963539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.867 [2024-07-25 11:53:49.963558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.867 #7 NEW cov: 12195 ft: 12709 corp: 3/109b lim: 105 exec/s: 0 rss: 72Mb L: 54/54 MS: 1 ChangeBit- 00:07:12.867 [2024-07-25 11:53:50.053599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451406876852802 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.867 [2024-07-25 11:53:50.053641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.867 [2024-07-25 11:53:50.053685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.867 [2024-07-25 11:53:50.053704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.867 #8 NEW cov: 12201 ft: 12928 corp: 4/164b lim: 105 exec/s: 0 rss: 72Mb L: 55/55 MS: 1 InsertByte- 00:07:12.867 [2024-07-25 11:53:50.113749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16855260271271864809 len:59882 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.867 [2024-07-25 11:53:50.113789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.867 [2024-07-25 11:53:50.113839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16855260271271864809 len:59882 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.867 [2024-07-25 11:53:50.113859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.867 [2024-07-25 11:53:50.113889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16855260271271864809 len:59882 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.867 [2024-07-25 11:53:50.113906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.867 #9 NEW cov: 12286 ft: 13498 corp: 5/230b lim: 105 exec/s: 0 rss: 72Mb L: 66/66 MS: 1 InsertRepeatedBytes- 00:07:13.126 [2024-07-25 11:53:50.173825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361700865079575813 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.126 [2024-07-25 11:53:50.173860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.126 #12 NEW cov: 12286 ft: 14046 corp: 6/256b lim: 105 exec/s: 0 rss: 72Mb L: 26/66 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:13.126 [2024-07-25 11:53:50.234006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.126 [2024-07-25 11:53:50.234040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.126 [2024-07-25 11:53:50.234076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4820894778470318658 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.126 [2024-07-25 11:53:50.234095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:13.126 #13 NEW cov: 12286 ft: 14208 corp: 7/310b lim: 105 exec/s: 0 rss: 72Mb L: 54/66 MS: 1 ChangeByte- 00:07:13.126 [2024-07-25 11:53:50.294096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060674 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.126 [2024-07-25 11:53:50.294127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.126 [2024-07-25 11:53:50.294176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.126 [2024-07-25 11:53:50.294194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.126 #14 NEW cov: 12286 ft: 14278 corp: 8/370b lim: 105 exec/s: 0 rss: 72Mb L: 60/66 MS: 1 InsertRepeatedBytes- 00:07:13.126 [2024-07-25 11:53:50.374276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361700864190383365 len:1481 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.126 [2024-07-25 11:53:50.374306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.385 #17 NEW cov: 12286 ft: 14361 corp: 9/391b lim: 105 exec/s: 0 rss: 72Mb L: 21/66 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:07:13.385 [2024-07-25 11:53:50.454503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361701281691403525 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.454534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.385 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:13.385 #18 NEW cov: 12303 ft: 14485 corp: 10/417b lim: 105 exec/s: 0 rss: 72Mb L: 26/66 MS: 1 ChangeByte- 00:07:13.385 [2024-07-25 11:53:50.514711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.514746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.385 [2024-07-25 11:53:50.514796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.514814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.385 [2024-07-25 11:53:50.514846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.514867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.385 #19 NEW cov: 12303 ft: 14516 corp: 11/484b lim: 105 exec/s: 0 rss: 72Mb L: 67/67 MS: 1 CopyPart- 00:07:13.385 [2024-07-25 11:53:50.574866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060674 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 
11:53:50.574896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.385 [2024-07-25 11:53:50.574944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.574963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.385 [2024-07-25 11:53:50.574994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4774451407313060418 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.575011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.385 #20 NEW cov: 12303 ft: 14550 corp: 12/547b lim: 105 exec/s: 20 rss: 72Mb L: 63/67 MS: 1 CrossOver- 00:07:13.385 [2024-07-25 11:53:50.634920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361700865079575813 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.634949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.385 #21 NEW cov: 12303 ft: 14577 corp: 13/573b lim: 105 exec/s: 21 rss: 72Mb L: 26/67 MS: 1 ChangeByte- 00:07:13.385 [2024-07-25 11:53:50.685181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774452167086064194 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.685214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.385 [2024-07-25 11:53:50.685250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.385 [2024-07-25 11:53:50.685269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.645 #22 NEW cov: 12303 ft: 14610 corp: 14/629b lim: 105 exec/s: 22 rss: 73Mb L: 56/67 MS: 1 InsertByte- 00:07:13.645 [2024-07-25 11:53:50.765257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361807517707470085 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.645 [2024-07-25 11:53:50.765287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.645 #23 NEW cov: 12303 ft: 14629 corp: 15/655b lim: 105 exec/s: 23 rss: 73Mb L: 26/67 MS: 1 CopyPart- 00:07:13.645 [2024-07-25 11:53:50.845507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060674 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.645 [2024-07-25 11:53:50.845538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.645 [2024-07-25 11:53:50.845586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.645 [2024-07-25 11:53:50.845604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.645 #24 NEW cov: 12303 ft: 14680 corp: 16/715b lim: 105 exec/s: 24 rss: 73Mb L: 60/67 MS: 1 ChangeBit- 00:07:13.645 [2024-07-25 11:53:50.925722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060674 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.645 [2024-07-25 11:53:50.925761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.645 [2024-07-25 11:53:50.925811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5447291387854327256 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.645 [2024-07-25 11:53:50.925829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.903 #25 NEW cov: 12303 ft: 14688 corp: 17/769b lim: 105 exec/s: 25 rss: 73Mb L: 54/67 MS: 1 CMP- DE: "\001\032\015\330K\230\252\250"- 00:07:13.903 [2024-07-25 11:53:50.975814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5447291387854327256 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.904 [2024-07-25 11:53:50.975844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.904 #26 NEW cov: 12303 ft: 14709 corp: 18/803b lim: 105 exec/s: 26 rss: 73Mb L: 34/67 MS: 1 PersAutoDict- DE: "\001\032\015\330K\230\252\250"- 00:07:13.904 [2024-07-25 11:53:51.035962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361700864190382341 len:1481 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.904 [2024-07-25 11:53:51.035992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.904 #27 NEW cov: 12303 ft: 14738 corp: 19/824b lim: 105 exec/s: 27 rss: 73Mb L: 21/67 MS: 1 ChangeBit- 00:07:13.904 [2024-07-25 11:53:51.116212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.904 [2024-07-25 11:53:51.116242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.904 [2024-07-25 11:53:51.116291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.904 [2024-07-25 11:53:51.116309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.904 #30 NEW cov: 12303 ft: 14762 corp: 20/870b lim: 105 exec/s: 30 rss: 73Mb L: 46/67 MS: 3 ChangeByte-ChangeBinInt-CrossOver- 00:07:13.904 [2024-07-25 11:53:51.166335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.904 [2024-07-25 11:53:51.166368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.904 [2024-07-25 11:53:51.166417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451409064575554 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.904 [2024-07-25 11:53:51.166436] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.163 #31 NEW cov: 12303 ft: 14780 corp: 21/918b lim: 105 exec/s: 31 rss: 73Mb L: 48/67 MS: 1 EraseBytes- 00:07:14.163 [2024-07-25 11:53:51.246542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060674 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.246573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.163 [2024-07-25 11:53:51.246622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.246642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.163 #32 NEW cov: 12303 ft: 14797 corp: 22/978b lim: 105 exec/s: 32 rss: 73Mb L: 60/67 MS: 1 ShuffleBytes- 00:07:14.163 [2024-07-25 11:53:51.326760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774452167086064194 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.326795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.163 [2024-07-25 11:53:51.326844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.326862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.163 #33 NEW cov: 12303 ft: 14841 corp: 23/1034b lim: 105 exec/s: 33 rss: 73Mb L: 56/67 MS: 1 ChangeBit- 00:07:14.163 [2024-07-25 11:53:51.406923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361701281691403525 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.406954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.163 #34 NEW cov: 12303 ft: 14897 corp: 24/1060b lim: 105 exec/s: 34 rss: 73Mb L: 26/67 MS: 1 CopyPart- 00:07:14.163 [2024-07-25 11:53:51.457174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.457207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.163 [2024-07-25 11:53:51.457256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.457274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.163 [2024-07-25 11:53:51.457305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4774451407313060418 len:16963 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.163 [2024-07-25 11:53:51.457322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:14.422 #35 NEW cov: 12310 ft: 14965 corp: 25/1128b lim: 105 exec/s: 35 rss: 73Mb L: 68/68 MS: 1 InsertByte- 00:07:14.422 [2024-07-25 11:53:51.537261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361701281691403525 len:1286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.422 [2024-07-25 11:53:51.537292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.422 #36 NEW cov: 12310 ft: 14973 corp: 26/1155b lim: 105 exec/s: 36 rss: 73Mb L: 27/68 MS: 1 CrossOver- 00:07:14.422 [2024-07-25 11:53:51.587385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:361700865079575813 len:31238 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.422 [2024-07-25 11:53:51.587415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.422 #37 NEW cov: 12310 ft: 14979 corp: 27/1181b lim: 105 exec/s: 18 rss: 73Mb L: 26/68 MS: 1 ChangeByte- 00:07:14.422 #37 DONE cov: 12310 ft: 14979 corp: 27/1181b lim: 105 exec/s: 18 rss: 73Mb 00:07:14.422 ###### Recommended dictionary. ###### 00:07:14.422 "\001\032\015\330K\230\252\250" # Uses: 1 00:07:14.422 ###### End of recommended dictionary. ###### 00:07:14.422 Done 37 runs in 2 second(s) 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.681 11:53:51 llvm_fuzz.nvmf_llvm_fuzz 
-- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:14.681 [2024-07-25 11:53:51.807012] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:14.681 [2024-07-25 11:53:51.807100] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908203 ] 00:07:14.681 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.968 [2024-07-25 11:53:52.017839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.968 [2024-07-25 11:53:52.088579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.968 [2024-07-25 11:53:52.148058] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.968 [2024-07-25 11:53:52.164354] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:14.969 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.969 INFO: Seed: 845354489 00:07:14.969 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:14.969 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:14.969 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:14.969 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.969 #2 INITED exec/s: 0 rss: 64Mb 00:07:14.969 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:14.969 This may also happen if the target rejected all inputs we tried so far 00:07:14.969 [2024-07-25 11:53:52.230049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.969 [2024-07-25 11:53:52.230082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.969 [2024-07-25 11:53:52.230119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.969 [2024-07-25 11:53:52.230135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.969 [2024-07-25 11:53:52.230193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.969 [2024-07-25 11:53:52.230209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.969 [2024-07-25 11:53:52.230270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.969 [2024-07-25 11:53:52.230286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.487 NEW_FUNC[1/702]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:15.487 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.487 #19 NEW cov: 12083 ft: 12077 corp: 2/116b lim: 120 exec/s: 0 rss: 72Mb L: 115/115 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:15.487 [2024-07-25 11:53:52.581123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070538657791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.581190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.487 [2024-07-25 11:53:52.581275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.581305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.487 [2024-07-25 11:53:52.581383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.581411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.487 [2024-07-25 11:53:52.581492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.581523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.487 #20 NEW 
cov: 12215 ft: 12634 corp: 3/231b lim: 120 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 ChangeBinInt- 00:07:15.487 [2024-07-25 11:53:52.640952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.640982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.487 [2024-07-25 11:53:52.641020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.641037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.487 [2024-07-25 11:53:52.641093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.641109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.487 [2024-07-25 11:53:52.641164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.641180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.487 #21 NEW cov: 12221 ft: 13024 corp: 4/346b lim: 120 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 ChangeBit- 00:07:15.487 [2024-07-25 11:53:52.681085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.681116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.487 [2024-07-25 11:53:52.681178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.487 [2024-07-25 11:53:52.681195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.488 [2024-07-25 11:53:52.681249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.681265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.488 [2024-07-25 11:53:52.681320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.681336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.488 #22 NEW cov: 12306 ft: 13388 corp: 5/462b lim: 120 exec/s: 0 rss: 72Mb L: 116/116 MS: 1 CrossOver- 00:07:15.488 [2024-07-25 11:53:52.721202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070538657791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.721231] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.488 [2024-07-25 11:53:52.721279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.721295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.488 [2024-07-25 11:53:52.721352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.721368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.488 [2024-07-25 11:53:52.721424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.721439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.488 #23 NEW cov: 12306 ft: 13476 corp: 6/578b lim: 120 exec/s: 0 rss: 72Mb L: 116/116 MS: 1 InsertByte- 00:07:15.488 [2024-07-25 11:53:52.771022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12080808861319145383 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.771051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.488 [2024-07-25 11:53:52.771099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.488 [2024-07-25 11:53:52.771115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.747 #28 NEW cov: 12306 ft: 13967 corp: 7/639b lim: 120 exec/s: 0 rss: 72Mb L: 61/116 MS: 5 CrossOver-ChangeByte-CrossOver-CrossOver-InsertRepeatedBytes- 00:07:15.747 [2024-07-25 11:53:52.811246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070475743231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.811276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.811313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.811331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.811387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.811403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.747 #29 NEW cov: 12306 ft: 14337 corp: 8/717b lim: 120 exec/s: 0 rss: 72Mb L: 78/116 MS: 1 CrossOver- 00:07:15.747 [2024-07-25 11:53:52.861417] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070475743231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.861446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.861481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.861498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.861556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.861588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.747 #30 NEW cov: 12306 ft: 14365 corp: 9/795b lim: 120 exec/s: 0 rss: 73Mb L: 78/116 MS: 1 ShuffleBytes- 00:07:15.747 [2024-07-25 11:53:52.911706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.911740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.911785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709549567 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.911804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.911861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.911878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.911932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.911949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.747 #31 NEW cov: 12306 ft: 14391 corp: 10/910b lim: 120 exec/s: 0 rss: 73Mb L: 115/116 MS: 1 ChangeBit- 00:07:15.747 [2024-07-25 11:53:52.961877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.961904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.747 [2024-07-25 11:53:52.961954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.961970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:15.747 [2024-07-25 11:53:52.962026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.747 [2024-07-25 11:53:52.962046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.748 [2024-07-25 11:53:52.962101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.748 [2024-07-25 11:53:52.962116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.748 #32 NEW cov: 12306 ft: 14416 corp: 11/1028b lim: 120 exec/s: 0 rss: 73Mb L: 118/118 MS: 1 CopyPart- 00:07:15.748 [2024-07-25 11:53:53.001753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070475743231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.748 [2024-07-25 11:53:53.001780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.748 [2024-07-25 11:53:53.001828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.748 [2024-07-25 11:53:53.001844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.748 [2024-07-25 11:53:53.001899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.748 [2024-07-25 11:53:53.001915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.748 #33 NEW cov: 12306 ft: 14505 corp: 12/1102b lim: 120 exec/s: 0 rss: 73Mb L: 74/118 MS: 1 EraseBytes- 00:07:15.748 [2024-07-25 11:53:53.041778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.748 [2024-07-25 11:53:53.041805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.748 [2024-07-25 11:53:53.041844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.748 [2024-07-25 11:53:53.041860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.007 #34 NEW cov: 12306 ft: 14552 corp: 13/1161b lim: 120 exec/s: 0 rss: 73Mb L: 59/118 MS: 1 EraseBytes- 00:07:16.007 [2024-07-25 11:53:53.081690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.081717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.007 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:16.007 #35 NEW cov: 12329 ft: 15496 corp: 14/1204b lim: 120 exec/s: 0 rss: 73Mb L: 
43/118 MS: 1 EraseBytes- 00:07:16.007 [2024-07-25 11:53:53.142347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.142376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.007 [2024-07-25 11:53:53.142422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.142439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.007 [2024-07-25 11:53:53.142509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446646937247000487 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.142524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.007 [2024-07-25 11:53:53.142583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12080808863958804391 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.142598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.007 #36 NEW cov: 12329 ft: 15524 corp: 15/1308b lim: 120 exec/s: 0 rss: 73Mb L: 104/118 MS: 1 CrossOver- 00:07:16.007 [2024-07-25 11:53:53.192509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070538657791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.192536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.007 [2024-07-25 11:53:53.192578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.192594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.007 [2024-07-25 11:53:53.192650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.192667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.007 [2024-07-25 11:53:53.192722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.007 [2024-07-25 11:53:53.192747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.007 #37 NEW cov: 12338 ft: 15553 corp: 16/1425b lim: 120 exec/s: 37 rss: 73Mb L: 117/118 MS: 1 InsertByte- 00:07:16.007 [2024-07-25 11:53:53.242657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.008 [2024-07-25 11:53:53.242684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.008 [2024-07-25 11:53:53.242733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.008 [2024-07-25 11:53:53.242759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.008 [2024-07-25 11:53:53.242813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.008 [2024-07-25 11:53:53.242829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.008 [2024-07-25 11:53:53.242885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.008 [2024-07-25 11:53:53.242901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.008 #38 NEW cov: 12338 ft: 15570 corp: 17/1544b lim: 120 exec/s: 38 rss: 73Mb L: 119/119 MS: 1 CopyPart- 00:07:16.008 [2024-07-25 11:53:53.282553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070475743231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.008 [2024-07-25 11:53:53.282580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.008 [2024-07-25 11:53:53.282617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.008 [2024-07-25 11:53:53.282637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.008 [2024-07-25 11:53:53.282692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.008 [2024-07-25 11:53:53.282708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.272 #39 NEW cov: 12338 ft: 15598 corp: 18/1639b lim: 120 exec/s: 39 rss: 73Mb L: 95/119 MS: 1 CrossOver- 00:07:16.272 [2024-07-25 11:53:53.332754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070475743231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.332781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.332820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.332837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.332891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.332907] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.272 #40 NEW cov: 12338 ft: 15600 corp: 19/1713b lim: 120 exec/s: 40 rss: 73Mb L: 74/119 MS: 1 ShuffleBytes- 00:07:16.272 [2024-07-25 11:53:53.383019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070219890687 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.383046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.383096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.383112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.383167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.383182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.383238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.383254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.272 #41 NEW cov: 12338 ft: 15626 corp: 20/1832b lim: 120 exec/s: 41 rss: 73Mb L: 119/119 MS: 1 ChangeBit- 00:07:16.272 [2024-07-25 11:53:53.433039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070475743231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.433066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.433111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.433125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.433197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4294967295 len:96 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.433216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.272 #42 NEW cov: 12338 ft: 15628 corp: 21/1927b lim: 120 exec/s: 42 rss: 73Mb L: 95/119 MS: 1 ChangeBinInt- 00:07:16.272 [2024-07-25 11:53:53.482879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14395694391606364103 len:51144 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.482909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.272 #43 NEW cov: 12338 ft: 15640 corp: 22/1973b lim: 120 exec/s: 43 rss: 74Mb L: 46/119 MS: 1 
InsertRepeatedBytes- 00:07:16.272 [2024-07-25 11:53:53.523250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070475743231 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.523278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.523320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.523335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.523393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.523408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.272 #44 NEW cov: 12338 ft: 15683 corp: 23/2051b lim: 120 exec/s: 44 rss: 74Mb L: 78/119 MS: 1 ChangeBit- 00:07:16.272 [2024-07-25 11:53:53.563710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:33925 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.563745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.563794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.563810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.563865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.563881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.563936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.563953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.272 [2024-07-25 11:53:53.564007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.272 [2024-07-25 11:53:53.564023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:16.532 #45 NEW cov: 12338 ft: 15729 corp: 24/2171b lim: 120 exec/s: 45 rss: 74Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:07:16.532 [2024-07-25 11:53:53.603669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.603697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.603742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.603758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.603813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.603827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.603883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.603899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.532 #46 NEW cov: 12338 ft: 15781 corp: 25/2287b lim: 120 exec/s: 46 rss: 74Mb L: 116/120 MS: 1 InsertByte- 00:07:16.532 [2024-07-25 11:53:53.643963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070219890687 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.643990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.644044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.644060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.644115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.644131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.644185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.644200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.644257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.644273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:16.532 #47 NEW cov: 12338 ft: 15791 corp: 26/2407b lim: 120 exec/s: 47 rss: 74Mb L: 120/120 MS: 1 InsertByte- 00:07:16.532 [2024-07-25 11:53:53.693982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070538657791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.694010] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.694058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.694074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.694143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.694158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.694216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.694234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.532 #48 NEW cov: 12338 ft: 15819 corp: 27/2522b lim: 120 exec/s: 48 rss: 74Mb L: 115/120 MS: 1 ChangeBinInt- 00:07:16.532 [2024-07-25 11:53:53.733888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18392419399970586623 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.733916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.733952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.733968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.734025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.734040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.532 #49 NEW cov: 12338 ft: 15841 corp: 28/2601b lim: 120 exec/s: 49 rss: 74Mb L: 79/120 MS: 1 InsertByte- 00:07:16.532 [2024-07-25 11:53:53.783911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:281470849586944 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.783938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.783988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.784004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.532 #52 NEW cov: 12338 ft: 15848 corp: 29/2650b lim: 120 exec/s: 52 rss: 74Mb L: 49/120 MS: 3 CMP-InsertByte-InsertRepeatedBytes- DE: "\001\000\000\000\000\000\000H"- 00:07:16.532 [2024-07-25 
11:53:53.824318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070538657791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.824346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.824398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.824415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.532 [2024-07-25 11:53:53.824471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.532 [2024-07-25 11:53:53.824486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.533 [2024-07-25 11:53:53.824542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.533 [2024-07-25 11:53:53.824557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.791 #53 NEW cov: 12338 ft: 15849 corp: 30/2765b lim: 120 exec/s: 53 rss: 74Mb L: 115/120 MS: 1 ShuffleBytes- 00:07:16.791 [2024-07-25 11:53:53.874158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12080808861319145383 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.874188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.874251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.874267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.791 #54 NEW cov: 12338 ft: 15858 corp: 31/2826b lim: 120 exec/s: 54 rss: 74Mb L: 61/120 MS: 1 ChangeByte- 00:07:16.791 [2024-07-25 11:53:53.924645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167843584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.924674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.924714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.924731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.924792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.924806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.924863] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.924879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.791 #55 NEW cov: 12338 ft: 15872 corp: 32/2939b lim: 120 exec/s: 55 rss: 74Mb L: 113/120 MS: 1 InsertRepeatedBytes- 00:07:16.791 [2024-07-25 11:53:53.974998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070538657791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.975028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.975081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744030759878655 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.975098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.975154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.975169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.975225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.791 [2024-07-25 11:53:53.975241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.791 [2024-07-25 11:53:53.975295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:53.975309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:16.792 #56 NEW cov: 12338 ft: 15883 corp: 33/3059b lim: 120 exec/s: 56 rss: 74Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:07:16.792 [2024-07-25 11:53:54.014842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.014874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.792 [2024-07-25 11:53:54.014911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.014928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.792 [2024-07-25 11:53:54.014983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.014999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:16.792 [2024-07-25 11:53:54.015054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744070639321087 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.015069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.792 #57 NEW cov: 12338 ft: 15897 corp: 34/3177b lim: 120 exec/s: 57 rss: 74Mb L: 118/120 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000H"- 00:07:16.792 [2024-07-25 11:53:54.064995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12080808861319145383 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.065023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.792 [2024-07-25 11:53:54.065071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.065087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.792 [2024-07-25 11:53:54.065138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:12080808863958804305 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.065153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.792 [2024-07-25 11:53:54.065207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12080808863958804391 len:42920 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.792 [2024-07-25 11:53:54.065224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.051 #58 NEW cov: 12338 ft: 15931 corp: 35/3275b lim: 120 exec/s: 58 rss: 74Mb L: 98/120 MS: 1 CopyPart- 00:07:17.051 [2024-07-25 11:53:54.115278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:33925 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.051 [2024-07-25 11:53:54.115306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.051 [2024-07-25 11:53:54.115364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.051 [2024-07-25 11:53:54.115378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.051 [2024-07-25 11:53:54.115433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.051 [2024-07-25 11:53:54.115449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.051 [2024-07-25 11:53:54.115504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.115522] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.052 [2024-07-25 11:53:54.115578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.115594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:17.052 #59 NEW cov: 12338 ft: 15942 corp: 36/3395b lim: 120 exec/s: 59 rss: 74Mb L: 120/120 MS: 1 CrossOver- 00:07:17.052 [2024-07-25 11:53:54.165296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.165325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.052 [2024-07-25 11:53:54.165372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.165388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.052 [2024-07-25 11:53:54.165443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.165460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.052 [2024-07-25 11:53:54.165514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.165530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.052 #60 NEW cov: 12338 ft: 15957 corp: 37/3513b lim: 120 exec/s: 60 rss: 74Mb L: 118/120 MS: 1 ChangeBinInt- 00:07:17.052 [2024-07-25 11:53:54.205338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.205365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.052 [2024-07-25 11:53:54.205429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.205446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.052 [2024-07-25 11:53:54.205502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.052 [2024-07-25 11:53:54.205517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.052 [2024-07-25 11:53:54.205573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:17.052 [2024-07-25 11:53:54.205588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.052 #61 NEW cov: 12338 ft: 15966 corp: 38/3632b lim: 120 exec/s: 30 rss: 74Mb L: 119/120 MS: 1 InsertByte- 00:07:17.052 #61 DONE cov: 12338 ft: 15966 corp: 38/3632b lim: 120 exec/s: 30 rss: 74Mb 00:07:17.052 ###### Recommended dictionary. ###### 00:07:17.052 "\001\000\000\000\000\000\000H" # Uses: 1 00:07:17.052 ###### End of recommended dictionary. ###### 00:07:17.052 Done 61 runs in 2 second(s) 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:17.311 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.312 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.312 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:17.312 11:53:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:17.312 [2024-07-25 11:53:54.424223] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
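A note on reading the *NOTICE* pairs that dominate these runs: nvme_qpair.c: 247:nvme_io_qpair_print_command echoes each fuzzed submission, and nvme_qpair.c: 477:spdk_nvme_print_completion prints the target's reply. The tuple "(00/0b)" is status code type 0x0 (generic command status) and status code 0x0b (Invalid Namespace or Format), the expected rejection given that every command carries nsid:0; p, m, and dnr are bits of the 16-bit status word in the completion queue entry. A minimal decode sketch, following the completion layout in the NVMe base specification (a self-contained illustration, not SPDK's own printing code):

#include <stdint.h>
#include <stdio.h>

/* Decode the status halfword (upper 16 bits of completion dword 3) into the
 * "(sct/sc) ... p:_ m:_ dnr:_" shape seen in the log lines above. */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;          /* bit 0: phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1: status code; 0x0b = Invalid Namespace or Format */
    unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type; 0x0 = generic command status */
    unsigned m   = (status >> 14) & 0x1;  /* bit 14: more status information available */
    unsigned dnr = (status >> 15) & 0x1;  /* bit 15: do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* DNR set, SC = 0x0b, everything else clear: prints "(00/0b) p:0 m:0 dnr:1" */
    print_status((1u << 15) | (0x0b << 1));
    return 0;
}

The "###### Recommended dictionary ######" block above is libFuzzer's persistent auto-dictionary: the entry "\001\000\000\000\000\000\000H" is the byte sequence that the CMP and PersAutoDict mutations (see the MS: annotations on corpus entries #52 and #57 above) kept reusing while growing coverage.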
00:07:17.312 [2024-07-25 11:53:54.424298] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908572 ] 00:07:17.312 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.571 [2024-07-25 11:53:54.618347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.571 [2024-07-25 11:53:54.690456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.571 [2024-07-25 11:53:54.750165] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.571 [2024-07-25 11:53:54.766470] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:17.571 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.571 INFO: Seed: 3447364917 00:07:17.571 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:17.571 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:17.571 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:17.571 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.571 #2 INITED exec/s: 0 rss: 64Mb 00:07:17.571 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:17.571 This may also happen if the target rejected all inputs we tried so far 00:07:17.571 [2024-07-25 11:53:54.811149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:17.571 [2024-07-25 11:53:54.811183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.571 [2024-07-25 11:53:54.811233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:17.571 [2024-07-25 11:53:54.811250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.571 [2024-07-25 11:53:54.811285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:17.571 [2024-07-25 11:53:54.811302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.088 NEW_FUNC[1/700]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:18.088 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:18.088 #12 NEW cov: 12046 ft: 12045 corp: 2/63b lim: 100 exec/s: 0 rss: 72Mb L: 62/62 MS: 5 ChangeByte-InsertByte-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:18.089 [2024-07-25 11:53:55.184328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.089 [2024-07-25 11:53:55.184369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.184451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.089 [2024-07-25 11:53:55.184464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.184549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.089 [2024-07-25 11:53:55.184567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.184663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:18.089 [2024-07-25 11:53:55.184682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.089 #14 NEW cov: 12159 ft: 13028 corp: 3/156b lim: 100 exec/s: 0 rss: 72Mb L: 93/93 MS: 2 CMP-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000\000"- 00:07:18.089 [2024-07-25 11:53:55.234484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.089 [2024-07-25 11:53:55.234510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.234585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.089 [2024-07-25 11:53:55.234600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.234679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.089 [2024-07-25 11:53:55.234697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.234790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:18.089 [2024-07-25 11:53:55.234807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.089 #15 NEW cov: 12165 ft: 13351 corp: 4/249b lim: 100 exec/s: 0 rss: 72Mb L: 93/93 MS: 1 CopyPart- 00:07:18.089 [2024-07-25 11:53:55.294327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.089 [2024-07-25 11:53:55.294356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.294412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.089 [2024-07-25 11:53:55.294428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.294507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.089 [2024-07-25 11:53:55.294524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.089 #16 NEW cov: 12250 ft: 13532 corp: 5/311b lim: 100 exec/s: 0 rss: 72Mb L: 62/93 MS: 1 ChangeByte- 00:07:18.089 [2024-07-25 11:53:55.364546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.089 [2024-07-25 11:53:55.364574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:18.089 [2024-07-25 11:53:55.364658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.089 [2024-07-25 11:53:55.364673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.089 [2024-07-25 11:53:55.364744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.089 [2024-07-25 11:53:55.364763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.348 #17 NEW cov: 12250 ft: 13595 corp: 6/374b lim: 100 exec/s: 0 rss: 72Mb L: 63/93 MS: 1 InsertByte- 00:07:18.348 [2024-07-25 11:53:55.434762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.348 [2024-07-25 11:53:55.434790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.434868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.348 [2024-07-25 11:53:55.434884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.434942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.348 [2024-07-25 11:53:55.434962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.348 #18 NEW cov: 12250 ft: 13658 corp: 7/436b lim: 100 exec/s: 0 rss: 72Mb L: 62/93 MS: 1 ChangeByte- 00:07:18.348 [2024-07-25 11:53:55.485033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.348 [2024-07-25 11:53:55.485061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.485138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.348 [2024-07-25 11:53:55.485157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.485215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.348 [2024-07-25 11:53:55.485233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.348 #19 NEW cov: 12250 ft: 13699 corp: 8/499b lim: 100 exec/s: 0 rss: 72Mb L: 63/93 MS: 1 CopyPart- 00:07:18.348 [2024-07-25 11:53:55.555073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.348 [2024-07-25 11:53:55.555112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.555216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.348 [2024-07-25 11:53:55.555240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.555348] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.348 [2024-07-25 11:53:55.555373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.348 #20 NEW cov: 12250 ft: 13915 corp: 9/561b lim: 100 exec/s: 0 rss: 72Mb L: 62/93 MS: 1 ChangeBinInt- 00:07:18.348 [2024-07-25 11:53:55.645444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.348 [2024-07-25 11:53:55.645488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.645606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.348 [2024-07-25 11:53:55.645630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.348 [2024-07-25 11:53:55.645733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.348 [2024-07-25 11:53:55.645764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.607 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:18.607 #21 NEW cov: 12273 ft: 14066 corp: 10/624b lim: 100 exec/s: 0 rss: 72Mb L: 63/93 MS: 1 InsertByte- 00:07:18.607 [2024-07-25 11:53:55.735405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.608 [2024-07-25 11:53:55.735438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.735543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.608 [2024-07-25 11:53:55.735559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.608 #23 NEW cov: 12273 ft: 14386 corp: 11/667b lim: 100 exec/s: 0 rss: 72Mb L: 43/93 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:18.608 [2024-07-25 11:53:55.786138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.608 [2024-07-25 11:53:55.786169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.786236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.608 [2024-07-25 11:53:55.786251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.786303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.608 [2024-07-25 11:53:55.786321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.786416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:18.608 [2024-07-25 11:53:55.786435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 
p:0 m:0 dnr:1 00:07:18.608 #24 NEW cov: 12273 ft: 14531 corp: 12/760b lim: 100 exec/s: 24 rss: 72Mb L: 93/93 MS: 1 CopyPart- 00:07:18.608 [2024-07-25 11:53:55.835941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.608 [2024-07-25 11:53:55.835968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.836046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.608 [2024-07-25 11:53:55.836061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.836123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.608 [2024-07-25 11:53:55.836140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.608 #25 NEW cov: 12273 ft: 14561 corp: 13/822b lim: 100 exec/s: 25 rss: 72Mb L: 62/93 MS: 1 CopyPart- 00:07:18.608 [2024-07-25 11:53:55.886363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.608 [2024-07-25 11:53:55.886390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.886454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.608 [2024-07-25 11:53:55.886471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.886523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.608 [2024-07-25 11:53:55.886538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.608 [2024-07-25 11:53:55.886627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:18.608 [2024-07-25 11:53:55.886645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.608 #26 NEW cov: 12273 ft: 14628 corp: 14/921b lim: 100 exec/s: 26 rss: 72Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:07:18.867 [2024-07-25 11:53:55.936328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.867 [2024-07-25 11:53:55.936358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:55.936423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.867 [2024-07-25 11:53:55.936440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:55.936490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.867 [2024-07-25 11:53:55.936506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.867 #27 NEW cov: 12273 ft: 14734 corp: 15/984b 
lim: 100 exec/s: 27 rss: 72Mb L: 63/99 MS: 1 ChangeBinInt- 00:07:18.867 [2024-07-25 11:53:55.997051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.867 [2024-07-25 11:53:55.997078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:55.997155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.867 [2024-07-25 11:53:55.997171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:55.997262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.867 [2024-07-25 11:53:55.997281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:55.997363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:18.867 [2024-07-25 11:53:55.997379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:55.997465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:18.867 [2024-07-25 11:53:55.997484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:18.867 #28 NEW cov: 12273 ft: 14785 corp: 16/1084b lim: 100 exec/s: 28 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:07:18.867 [2024-07-25 11:53:56.056774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.867 [2024-07-25 11:53:56.056803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:56.056856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.867 [2024-07-25 11:53:56.056871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:56.056922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:18.867 [2024-07-25 11:53:56.056937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.867 #29 NEW cov: 12273 ft: 14809 corp: 17/1146b lim: 100 exec/s: 29 rss: 72Mb L: 62/100 MS: 1 ChangeBinInt- 00:07:18.867 [2024-07-25 11:53:56.106924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.867 [2024-07-25 11:53:56.106954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:56.107012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.867 [2024-07-25 11:53:56.107027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:56.107112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:2 nsid:0 00:07:18.867 [2024-07-25 11:53:56.107132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.867 #30 NEW cov: 12273 ft: 14831 corp: 18/1209b lim: 100 exec/s: 30 rss: 72Mb L: 63/100 MS: 1 ChangeByte- 00:07:18.867 [2024-07-25 11:53:56.156896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:18.867 [2024-07-25 11:53:56.156922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.867 [2024-07-25 11:53:56.156973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:18.867 [2024-07-25 11:53:56.156990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.128 #31 NEW cov: 12273 ft: 14872 corp: 19/1265b lim: 100 exec/s: 31 rss: 72Mb L: 56/100 MS: 1 EraseBytes- 00:07:19.128 [2024-07-25 11:53:56.207269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.128 [2024-07-25 11:53:56.207296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.207365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.128 [2024-07-25 11:53:56.207382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.207456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.128 [2024-07-25 11:53:56.207472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.128 #32 NEW cov: 12273 ft: 14888 corp: 20/1327b lim: 100 exec/s: 32 rss: 72Mb L: 62/100 MS: 1 ChangeBinInt- 00:07:19.128 [2024-07-25 11:53:56.257379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.128 [2024-07-25 11:53:56.257405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.257464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.128 [2024-07-25 11:53:56.257480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.257551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.128 [2024-07-25 11:53:56.257569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.128 #33 NEW cov: 12273 ft: 14919 corp: 21/1389b lim: 100 exec/s: 33 rss: 72Mb L: 62/100 MS: 1 ChangeBit- 00:07:19.128 [2024-07-25 11:53:56.307577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.128 [2024-07-25 11:53:56.307604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.307664] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.128 [2024-07-25 11:53:56.307681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.307744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.128 [2024-07-25 11:53:56.307760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.128 #34 NEW cov: 12273 ft: 14932 corp: 22/1459b lim: 100 exec/s: 34 rss: 73Mb L: 70/100 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:19.128 [2024-07-25 11:53:56.358037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.128 [2024-07-25 11:53:56.358065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.358130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.128 [2024-07-25 11:53:56.358149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.358227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.128 [2024-07-25 11:53:56.358244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.358329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:19.128 [2024-07-25 11:53:56.358346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.128 #35 NEW cov: 12273 ft: 14958 corp: 23/1552b lim: 100 exec/s: 35 rss: 73Mb L: 93/100 MS: 1 ChangeBit- 00:07:19.128 [2024-07-25 11:53:56.417976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.128 [2024-07-25 11:53:56.418003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.418067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.128 [2024-07-25 11:53:56.418081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.128 [2024-07-25 11:53:56.418137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.128 [2024-07-25 11:53:56.418156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.387 #36 NEW cov: 12273 ft: 14970 corp: 24/1615b lim: 100 exec/s: 36 rss: 73Mb L: 63/100 MS: 1 ChangeBit- 00:07:19.387 [2024-07-25 11:53:56.477888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.387 [2024-07-25 11:53:56.477915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.387 [2024-07-25 11:53:56.477964] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.387 [2024-07-25 11:53:56.477981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.387 #37 NEW cov: 12273 ft: 15005 corp: 25/1671b lim: 100 exec/s: 37 rss: 73Mb L: 56/100 MS: 1 ChangeByte- 00:07:19.387 [2024-07-25 11:53:56.538400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.388 [2024-07-25 11:53:56.538429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.388 [2024-07-25 11:53:56.538499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.388 [2024-07-25 11:53:56.538516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.388 [2024-07-25 11:53:56.538595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.388 [2024-07-25 11:53:56.538614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.388 #38 NEW cov: 12273 ft: 15027 corp: 26/1749b lim: 100 exec/s: 38 rss: 73Mb L: 78/100 MS: 1 CopyPart- 00:07:19.388 [2024-07-25 11:53:56.588652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.388 [2024-07-25 11:53:56.588679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.388 [2024-07-25 11:53:56.588753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.388 [2024-07-25 11:53:56.588780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.388 [2024-07-25 11:53:56.588844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.388 [2024-07-25 11:53:56.588863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.388 #39 NEW cov: 12273 ft: 15095 corp: 27/1827b lim: 100 exec/s: 39 rss: 73Mb L: 78/100 MS: 1 EraseBytes- 00:07:19.388 [2024-07-25 11:53:56.648538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.388 [2024-07-25 11:53:56.648565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.388 [2024-07-25 11:53:56.648628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.388 [2024-07-25 11:53:56.648644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.388 #40 NEW cov: 12273 ft: 15109 corp: 28/1871b lim: 100 exec/s: 40 rss: 73Mb L: 44/100 MS: 1 InsertByte- 00:07:19.646 [2024-07-25 11:53:56.708889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.646 [2024-07-25 11:53:56.708919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.647 
[2024-07-25 11:53:56.708992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.647 [2024-07-25 11:53:56.709011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.647 #41 NEW cov: 12273 ft: 15140 corp: 29/1916b lim: 100 exec/s: 41 rss: 73Mb L: 45/100 MS: 1 InsertByte- 00:07:19.647 [2024-07-25 11:53:56.779323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:19.647 [2024-07-25 11:53:56.779354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.647 [2024-07-25 11:53:56.779430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:19.647 [2024-07-25 11:53:56.779451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.647 [2024-07-25 11:53:56.779541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:19.647 [2024-07-25 11:53:56.779562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.647 #42 NEW cov: 12273 ft: 15174 corp: 30/1978b lim: 100 exec/s: 21 rss: 73Mb L: 62/100 MS: 1 ShuffleBytes- 00:07:19.647 #42 DONE cov: 12273 ft: 15174 corp: 30/1978b lim: 100 exec/s: 21 rss: 73Mb 00:07:19.647 ###### Recommended dictionary. ###### 00:07:19.647 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:19.647 ###### End of recommended dictionary. ###### 00:07:19.647 Done 42 runs in 2 second(s) 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 
's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.647 11:53:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:19.906 [2024-07-25 11:53:56.973806] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:19.906 [2024-07-25 11:53:56.973883] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908886 ] 00:07:19.906 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.165 [2024-07-25 11:53:57.274034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.165 [2024-07-25 11:53:57.369175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.165 [2024-07-25 11:53:57.428824] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.165 [2024-07-25 11:53:57.445128] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:20.165 INFO: Running with entropic power schedule (0xFF, 100). 00:07:20.165 INFO: Seed: 1831394459 00:07:20.425 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:20.425 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:20.425 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:20.425 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.425 #2 INITED exec/s: 0 rss: 65Mb 00:07:20.425 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:20.425 This may also happen if the target rejected all inputs we tried so far 00:07:20.425 [2024-07-25 11:53:57.510438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:20.425 [2024-07-25 11:53:57.510475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.425 [2024-07-25 11:53:57.510543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:20.425 [2024-07-25 11:53:57.510559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.425 [2024-07-25 11:53:57.510610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49419 00:07:20.425 [2024-07-25 11:53:57.510625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.684 NEW_FUNC[1/700]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:20.684 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.684 #5 NEW cov: 12006 ft: 12006 corp: 2/31b lim: 50 exec/s: 0 rss: 71Mb L: 30/30 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:20.684 [2024-07-25 11:53:57.851596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:20.684 [2024-07-25 11:53:57.851689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.684 [2024-07-25 11:53:57.851818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307780206215617 len:11566 00:07:20.684 [2024-07-25 11:53:57.851849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.684 [2024-07-25 11:53:57.851925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653355256032705 len:49602 00:07:20.684 [2024-07-25 11:53:57.851953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.684 #6 NEW cov: 12136 ft: 12792 corp: 3/70b lim: 50 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:20.684 [2024-07-25 11:53:57.911341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:20.684 [2024-07-25 11:53:57.911372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.684 [2024-07-25 11:53:57.911404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:20.684 [2024-07-25 11:53:57.911419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.684 [2024-07-25 11:53:57.911467] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:20.684 [2024-07-25 11:53:57.911481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.684 #12 NEW cov: 12142 ft: 13090 corp: 4/101b lim: 50 exec/s: 0 rss: 72Mb L: 31/39 MS: 1 InsertByte- 00:07:20.684 [2024-07-25 11:53:57.951420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:20.684 [2024-07-25 11:53:57.951449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.684 [2024-07-25 11:53:57.951481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:20.684 [2024-07-25 11:53:57.951495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.684 [2024-07-25 11:53:57.951548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:20.684 [2024-07-25 11:53:57.951563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.943 #13 NEW cov: 12227 ft: 13282 corp: 5/132b lim: 50 exec/s: 0 rss: 72Mb L: 31/39 MS: 1 ShuffleBytes- 00:07:20.943 [2024-07-25 11:53:58.001342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:20.943 [2024-07-25 11:53:58.001369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.943 #14 NEW cov: 12227 ft: 13755 corp: 6/145b lim: 50 exec/s: 0 rss: 72Mb L: 13/39 MS: 1 InsertRepeatedBytes- 00:07:20.943 [2024-07-25 11:53:58.041645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:20.943 [2024-07-25 11:53:58.041674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.943 [2024-07-25 11:53:58.041724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:20.943 [2024-07-25 11:53:58.041747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.943 [2024-07-25 11:53:58.041800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653624036770241 len:65474 00:07:20.943 [2024-07-25 11:53:58.041816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.943 #15 NEW cov: 12227 ft: 13801 corp: 7/176b lim: 50 exec/s: 0 rss: 72Mb L: 31/39 MS: 1 CrossOver- 00:07:20.943 [2024-07-25 11:53:58.081649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:20.943 [2024-07-25 11:53:58.081677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:20.943 [2024-07-25 11:53:58.081742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:20.943 [2024-07-25 11:53:58.081757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.944 #16 NEW cov: 12227 ft: 14047 corp: 8/197b lim: 50 exec/s: 0 rss: 72Mb L: 21/39 MS: 1 EraseBytes- 00:07:20.944 [2024-07-25 11:53:58.131779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:20.944 [2024-07-25 11:53:58.131808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.944 [2024-07-25 11:53:58.131853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307780206215617 len:11714 00:07:20.944 [2024-07-25 11:53:58.131870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.944 #17 NEW cov: 12227 ft: 14077 corp: 9/220b lim: 50 exec/s: 0 rss: 72Mb L: 23/39 MS: 1 EraseBytes- 00:07:20.944 [2024-07-25 11:53:58.182012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:20.944 [2024-07-25 11:53:58.182040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.944 [2024-07-25 11:53:58.182075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:20.944 [2024-07-25 11:53:58.182091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.944 [2024-07-25 11:53:58.182144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:20.944 [2024-07-25 11:53:58.182158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.944 #18 NEW cov: 12227 ft: 14116 corp: 10/251b lim: 50 exec/s: 0 rss: 72Mb L: 31/39 MS: 1 ShuffleBytes- 00:07:20.944 [2024-07-25 11:53:58.222123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:20.944 [2024-07-25 11:53:58.222150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.944 [2024-07-25 11:53:58.222202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961652928252068289 len:49602 00:07:20.944 [2024-07-25 11:53:58.222219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.944 [2024-07-25 11:53:58.222270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49419 00:07:20.944 [2024-07-25 11:53:58.222286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.944 #19 NEW cov: 12227 ft: 14197 corp: 11/281b lim: 50 
exec/s: 0 rss: 72Mb L: 30/39 MS: 1 ChangeByte- 00:07:21.203 [2024-07-25 11:53:58.262202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890935180374963 len:46004 00:07:21.203 [2024-07-25 11:53:58.262231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.203 [2024-07-25 11:53:58.262287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12948890938015724467 len:46004 00:07:21.203 [2024-07-25 11:53:58.262303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.203 [2024-07-25 11:53:58.262354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12948890938015724467 len:46004 00:07:21.203 [2024-07-25 11:53:58.262369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.203 #20 NEW cov: 12227 ft: 14244 corp: 12/311b lim: 50 exec/s: 0 rss: 72Mb L: 30/39 MS: 1 InsertRepeatedBytes- 00:07:21.203 [2024-07-25 11:53:58.302408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.203 [2024-07-25 11:53:58.302436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.203 [2024-07-25 11:53:58.302477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307780206215617 len:11714 00:07:21.203 [2024-07-25 11:53:58.302493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.203 [2024-07-25 11:53:58.302542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.203 [2024-07-25 11:53:58.302558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.203 [2024-07-25 11:53:58.302610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3255307780206215469 len:49602 00:07:21.204 [2024-07-25 11:53:58.302626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.204 #21 NEW cov: 12227 ft: 14491 corp: 13/353b lim: 50 exec/s: 0 rss: 72Mb L: 42/42 MS: 1 CopyPart- 00:07:21.204 [2024-07-25 11:53:58.352436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:21.204 [2024-07-25 11:53:58.352470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.204 [2024-07-25 11:53:58.352528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:21.204 [2024-07-25 11:53:58.352544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.204 [2024-07-25 11:53:58.352597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.204 [2024-07-25 11:53:58.352612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.204 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:21.204 #22 NEW cov: 12250 ft: 14534 corp: 14/384b lim: 50 exec/s: 0 rss: 72Mb L: 31/42 MS: 1 ShuffleBytes- 00:07:21.204 [2024-07-25 11:53:58.412760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.204 [2024-07-25 11:53:58.412788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.204 [2024-07-25 11:53:58.412835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307780206215617 len:11714 00:07:21.204 [2024-07-25 11:53:58.412850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.204 [2024-07-25 11:53:58.412900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.204 [2024-07-25 11:53:58.412931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.204 [2024-07-25 11:53:58.412983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3255308415861375277 len:49602 00:07:21.204 [2024-07-25 11:53:58.412997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.204 #23 NEW cov: 12250 ft: 14613 corp: 15/426b lim: 50 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 CrossOver- 00:07:21.204 [2024-07-25 11:53:58.462668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.204 [2024-07-25 11:53:58.462695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.204 [2024-07-25 11:53:58.462755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:21.204 [2024-07-25 11:53:58.462772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.204 #24 NEW cov: 12250 ft: 14631 corp: 16/449b lim: 50 exec/s: 0 rss: 73Mb L: 23/42 MS: 1 EraseBytes- 00:07:21.204 [2024-07-25 11:53:58.502686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13054187465626091520 len:56334 00:07:21.204 [2024-07-25 11:53:58.502715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.463 #27 NEW cov: 12250 ft: 14672 corp: 17/466b lim: 50 exec/s: 27 rss: 73Mb L: 17/42 MS: 3 CrossOver-CMP-CMP- DE: "\000\000\000\000\000\000\000\000"-"\265)\312V\334\015\032\000"- 00:07:21.463 [2024-07-25 11:53:58.542989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:21.463 [2024-07-25 11:53:58.543016] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.543053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:21.463 [2024-07-25 11:53:58.543068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.543119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.463 [2024-07-25 11:53:58.543134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.463 #28 NEW cov: 12250 ft: 14695 corp: 18/498b lim: 50 exec/s: 28 rss: 73Mb L: 32/42 MS: 1 InsertByte- 00:07:21.463 [2024-07-25 11:53:58.583201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.463 [2024-07-25 11:53:58.583229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.583263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307780206215617 len:47803 00:07:21.463 [2024-07-25 11:53:58.583278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.583330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357630860737 len:49602 00:07:21.463 [2024-07-25 11:53:58.583345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.583398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13961652722093638081 len:11566 00:07:21.463 [2024-07-25 11:53:58.583412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.463 #29 NEW cov: 12250 ft: 14747 corp: 19/544b lim: 50 exec/s: 29 rss: 73Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:07:21.463 [2024-07-25 11:53:58.633244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:21.463 [2024-07-25 11:53:58.633271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.633309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797897 len:49602 00:07:21.463 [2024-07-25 11:53:58.633324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.633376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.463 [2024-07-25 11:53:58.633390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.463 #30 NEW cov: 
12250 ft: 14757 corp: 20/575b lim: 50 exec/s: 30 rss: 73Mb L: 31/46 MS: 1 ChangeBit- 00:07:21.463 [2024-07-25 11:53:58.673335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890935180374963 len:46004 00:07:21.463 [2024-07-25 11:53:58.673362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.673400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5915882579226243982 len:3355 00:07:21.463 [2024-07-25 11:53:58.673415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.673467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12948890935012602803 len:46004 00:07:21.463 [2024-07-25 11:53:58.673504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.463 #31 NEW cov: 12250 ft: 14796 corp: 21/605b lim: 50 exec/s: 31 rss: 73Mb L: 30/46 MS: 1 CMP- DE: "\216R\031o\334\015\032\000"- 00:07:21.463 [2024-07-25 11:53:58.723505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958344092913091009 len:65536 00:07:21.463 [2024-07-25 11:53:58.723532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.723565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653358793048513 len:49610 00:07:21.463 [2024-07-25 11:53:58.723579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.463 [2024-07-25 11:53:58.723630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.463 [2024-07-25 11:53:58.723645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.463 #32 NEW cov: 12250 ft: 14815 corp: 22/642b lim: 50 exec/s: 32 rss: 73Mb L: 37/46 MS: 1 InsertRepeatedBytes- 00:07:21.723 [2024-07-25 11:53:58.773840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.723 [2024-07-25 11:53:58.773868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.773917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:21.723 [2024-07-25 11:53:58.773931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.773980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.723 [2024-07-25 11:53:58.773995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.774045] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3255307780206177581 len:11566 00:07:21.723 [2024-07-25 11:53:58.774060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.774112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:13961653355265769921 len:49419 00:07:21.723 [2024-07-25 11:53:58.774126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.723 #33 NEW cov: 12250 ft: 14891 corp: 23/692b lim: 50 exec/s: 33 rss: 73Mb L: 50/50 MS: 1 CrossOver- 00:07:21.723 [2024-07-25 11:53:58.813669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1457261850 len:1 00:07:21.723 [2024-07-25 11:53:58.813696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.813729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14580082792051291433 len:6657 00:07:21.723 [2024-07-25 11:53:58.813750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.723 #34 NEW cov: 12250 ft: 14924 corp: 24/717b lim: 50 exec/s: 34 rss: 73Mb L: 25/50 MS: 1 CopyPart- 00:07:21.723 [2024-07-25 11:53:58.863910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958344092913091009 len:65536 00:07:21.723 [2024-07-25 11:53:58.863938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.863972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653358793012929 len:49610 00:07:21.723 [2024-07-25 11:53:58.863988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.864039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.723 [2024-07-25 11:53:58.864054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.723 #35 NEW cov: 12250 ft: 14933 corp: 25/754b lim: 50 exec/s: 35 rss: 73Mb L: 37/50 MS: 1 ChangeByte- 00:07:21.723 [2024-07-25 11:53:58.913902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.723 [2024-07-25 11:53:58.913930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.913997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307780197827009 len:11714 00:07:21.723 [2024-07-25 11:53:58.914013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.723 #36 NEW cov: 12250 ft: 14946 corp: 26/777b lim: 50 exec/s: 36 rss: 73Mb L: 23/50 MS: 1 ChangeBit- 
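(A note on reading these triage records: the decimal lba/len fields are simply the fuzzed input bytes reinterpreted as an NVMe command. The recurring lba:13961653357748797889 is 0xC1C1C1C1C1C1C1C1 and len:49602 is 0xC1C2 — repeated-byte fill patterns that mutators such as InsertRepeatedBytes plausibly produced. The smaller value is easy to confirm:)

    # confirm that len:49602 in the triage output is the byte pattern 0xC1C2
    printf '%x\n' 49602    # -> c1c2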
00:07:21.723 [2024-07-25 11:53:58.953928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:21.723 [2024-07-25 11:53:58.953955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.723 #37 NEW cov: 12250 ft: 14963 corp: 27/788b lim: 50 exec/s: 37 rss: 73Mb L: 11/50 MS: 1 InsertRepeatedBytes- 00:07:21.723 [2024-07-25 11:53:58.994251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890935180374963 len:46004 00:07:21.723 [2024-07-25 11:53:58.994278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.994330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5915882579226243982 len:3355 00:07:21.723 [2024-07-25 11:53:58.994345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.723 [2024-07-25 11:53:58.994396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12948890935012568755 len:46004 00:07:21.723 [2024-07-25 11:53:58.994411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.983 #38 NEW cov: 12250 ft: 14981 corp: 28/818b lim: 50 exec/s: 38 rss: 73Mb L: 30/50 MS: 1 ChangeByte- 00:07:21.983 [2024-07-25 11:53:59.044486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.983 [2024-07-25 11:53:59.044513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.044560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961652722093638081 len:11566 00:07:21.983 [2024-07-25 11:53:59.044576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.044624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653355256070593 len:49602 00:07:21.983 [2024-07-25 11:53:59.044640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.044690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13961489994372727233 len:11566 00:07:21.983 [2024-07-25 11:53:59.044706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.983 #39 NEW cov: 12250 ft: 14989 corp: 29/863b lim: 50 exec/s: 39 rss: 73Mb L: 45/50 MS: 1 CrossOver- 00:07:21.983 [2024-07-25 11:53:59.084515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:21.983 [2024-07-25 11:53:59.084542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.084576] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:21.983 [2024-07-25 11:53:59.084591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.084641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:07:21.983 [2024-07-25 11:53:59.084657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.983 #40 NEW cov: 12250 ft: 14997 corp: 30/894b lim: 50 exec/s: 40 rss: 73Mb L: 31/50 MS: 1 ShuffleBytes- 00:07:21.983 [2024-07-25 11:53:59.134751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.983 [2024-07-25 11:53:59.134777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.134826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961652722093638081 len:11566 00:07:21.983 [2024-07-25 11:53:59.134840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.134889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653355256070593 len:49536 00:07:21.983 [2024-07-25 11:53:59.134904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.134953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13961652722093638081 len:11566 00:07:21.983 [2024-07-25 11:53:59.134968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.983 #41 NEW cov: 12250 ft: 15011 corp: 31/940b lim: 50 exec/s: 41 rss: 73Mb L: 46/50 MS: 1 InsertByte- 00:07:21.983 [2024-07-25 11:53:59.184808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:21.983 [2024-07-25 11:53:59.184842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.184895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:07:21.983 [2024-07-25 11:53:59.184912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.184964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961474407936410049 len:1 00:07:21.983 [2024-07-25 11:53:59.184980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.983 #42 NEW cov: 12250 ft: 15017 corp: 32/971b lim: 50 exec/s: 42 rss: 73Mb L: 31/50 MS: 1 ChangeBinInt- 00:07:21.983 [2024-07-25 11:53:59.235000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:21.983 [2024-07-25 11:53:59.235032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.235094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748799937 len:49602 00:07:21.983 [2024-07-25 11:53:59.235109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.235159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49419 00:07:21.983 [2024-07-25 11:53:59.235174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.983 #43 NEW cov: 12250 ft: 15047 corp: 33/1001b lim: 50 exec/s: 43 rss: 74Mb L: 30/50 MS: 1 ChangeBit- 00:07:21.983 [2024-07-25 11:53:59.274963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1457261850 len:1 00:07:21.983 [2024-07-25 11:53:59.274992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.983 [2024-07-25 11:53:59.275048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14555728613189681333 len:6657 00:07:21.983 [2024-07-25 11:53:59.275064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.243 #44 NEW cov: 12250 ft: 15076 corp: 34/1026b lim: 50 exec/s: 44 rss: 74Mb L: 25/50 MS: 1 ShuffleBytes- 00:07:22.243 [2024-07-25 11:53:59.325159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890935180374963 len:46004 00:07:22.243 [2024-07-25 11:53:59.325188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.325223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5915882579226243982 len:3355 00:07:22.243 [2024-07-25 11:53:59.325238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.325288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12948890935012568755 len:46004 00:07:22.243 [2024-07-25 11:53:59.325302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.243 #45 NEW cov: 12250 ft: 15090 corp: 35/1056b lim: 50 exec/s: 45 rss: 74Mb L: 30/50 MS: 1 ShuffleBytes- 00:07:22.243 [2024-07-25 11:53:59.375291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13958275656904196545 len:49602 00:07:22.243 [2024-07-25 11:53:59.375318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.375357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 
len:49602 00:07:22.243 [2024-07-25 11:53:59.375373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.375424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653624036770242 len:65474 00:07:22.243 [2024-07-25 11:53:59.375440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.243 #46 NEW cov: 12250 ft: 15130 corp: 36/1087b lim: 50 exec/s: 46 rss: 74Mb L: 31/50 MS: 1 ChangeByte- 00:07:22.243 [2024-07-25 11:53:59.415409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624726465 len:49602 00:07:22.243 [2024-07-25 11:53:59.415437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.415489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748799937 len:49602 00:07:22.243 [2024-07-25 11:53:59.415505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.415555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49419 00:07:22.243 [2024-07-25 11:53:59.415571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.243 #47 NEW cov: 12250 ft: 15133 corp: 37/1117b lim: 50 exec/s: 47 rss: 74Mb L: 30/50 MS: 1 ChangeBit- 00:07:22.243 [2024-07-25 11:53:59.465682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356624724417 len:49602 00:07:22.243 [2024-07-25 11:53:59.465709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.465761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961652722093638081 len:11566 00:07:22.243 [2024-07-25 11:53:59.465776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.465821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653355256070593 len:49536 00:07:22.243 [2024-07-25 11:53:59.465835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.243 [2024-07-25 11:53:59.465884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13961652722093638081 len:11566 00:07:22.243 [2024-07-25 11:53:59.465899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.243 #48 NEW cov: 12250 ft: 15147 corp: 38/1163b lim: 50 exec/s: 24 rss: 74Mb L: 46/50 MS: 1 CrossOver- 00:07:22.243 #48 DONE cov: 12250 ft: 15147 corp: 38/1163b lim: 50 exec/s: 24 rss: 74Mb 00:07:22.243 ###### Recommended dictionary. 
###### 00:07:22.243 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:22.243 "\265)\312V\334\015\032\000" # Uses: 0 00:07:22.243 "\216R\031o\334\015\032\000" # Uses: 0 00:07:22.243 ###### End of recommended dictionary. ###### 00:07:22.243 Done 48 runs in 2 second(s) 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.503 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.504 11:53:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:22.504 [2024-07-25 11:53:59.685630] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
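(The leak:spdk_nvmf_qpair_disconnect and leak:nvmf_ctrlr_create echoes at run.sh@41-42 above pair with the LSAN_OPTIONS value set at run.sh@32: LeakSanitizer reads a suppression file of leak:<symbol> lines and skips reports matching them. The xtrace does not show shell redirections, so the exact file plumbing below is an assumption reconstructed from those lines:)

    # assemble the LeakSanitizer suppression list named in LSAN_OPTIONS
    # (redirections inferred; only the two leak: patterns appear in the trace)
    supp=/var/tmp/suppress_nvmf_fuzz
    echo "leak:spdk_nvmf_qpair_disconnect" >  "$supp"
    echo "leak:nvmf_ctrlr_create"          >> "$supp"
    export LSAN_OPTIONS="report_objects=1:suppressions=$supp:print_suppressions=0"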
00:07:22.504 [2024-07-25 11:53:59.685720] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909229 ] 00:07:22.504 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.762 [2024-07-25 11:53:59.896574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.762 [2024-07-25 11:53:59.966362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.762 [2024-07-25 11:54:00.026200] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.763 [2024-07-25 11:54:00.042523] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:22.763 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.763 INFO: Seed: 134439180 00:07:23.021 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:23.021 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:23.021 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:23.021 INFO: A corpus is not provided, starting from an empty corpus 00:07:23.021 #2 INITED exec/s: 0 rss: 65Mb 00:07:23.021 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:23.021 This may also happen if the target rejected all inputs we tried so far 00:07:23.021 [2024-07-25 11:54:00.108080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.021 [2024-07-25 11:54:00.108116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.021 [2024-07-25 11:54:00.108166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.021 [2024-07-25 11:54:00.108183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.021 [2024-07-25 11:54:00.108236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.021 [2024-07-25 11:54:00.108252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.021 [2024-07-25 11:54:00.108305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:23.021 [2024-07-25 11:54:00.108320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.279 NEW_FUNC[1/702]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:23.279 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:23.279 #3 NEW cov: 12064 ft: 12023 corp: 2/89b lim: 90 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:23.279 [2024-07-25 11:54:00.448818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.279 [2024-07-25 11:54:00.448879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.279 [2024-07-25 11:54:00.448961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.279 [2024-07-25 11:54:00.448984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.279 [2024-07-25 11:54:00.449047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.279 [2024-07-25 11:54:00.449069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.279 #4 NEW cov: 12194 ft: 13122 corp: 3/159b lim: 90 exec/s: 0 rss: 72Mb L: 70/88 MS: 1 CrossOver- 00:07:23.280 [2024-07-25 11:54:00.498819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.280 [2024-07-25 11:54:00.498852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.280 [2024-07-25 11:54:00.498895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.280 [2024-07-25 11:54:00.498910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.280 [2024-07-25 11:54:00.498964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.280 [2024-07-25 11:54:00.498981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.280 #5 NEW cov: 12200 ft: 13356 corp: 4/229b lim: 90 exec/s: 0 rss: 73Mb L: 70/88 MS: 1 CopyPart- 00:07:23.280 [2024-07-25 11:54:00.548939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.280 [2024-07-25 11:54:00.548970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.280 [2024-07-25 11:54:00.549012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.280 [2024-07-25 11:54:00.549030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.280 [2024-07-25 11:54:00.549085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.280 [2024-07-25 11:54:00.549101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.538 #16 NEW cov: 12285 ft: 13545 corp: 5/299b lim: 90 exec/s: 0 rss: 73Mb L: 70/88 MS: 1 ChangeBit- 00:07:23.538 [2024-07-25 11:54:00.609144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.538 [2024-07-25 11:54:00.609174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.609232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.538 [2024-07-25 11:54:00.609249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.609304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.538 [2024-07-25 11:54:00.609320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.538 #17 NEW cov: 12285 ft: 13655 corp: 6/369b lim: 90 exec/s: 0 rss: 73Mb L: 70/88 MS: 1 ChangeBit- 00:07:23.538 [2024-07-25 11:54:00.649239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.538 [2024-07-25 11:54:00.649269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.649318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.538 [2024-07-25 11:54:00.649338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.649394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.538 [2024-07-25 11:54:00.649412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.538 #18 NEW cov: 12285 ft: 13734 corp: 7/440b lim: 90 exec/s: 0 rss: 73Mb L: 71/88 MS: 1 InsertByte- 00:07:23.538 [2024-07-25 11:54:00.699367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.538 [2024-07-25 11:54:00.699398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.699448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.538 [2024-07-25 11:54:00.699465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.699520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.538 [2024-07-25 11:54:00.699537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.538 #19 NEW cov: 12285 ft: 13804 corp: 8/511b lim: 90 exec/s: 0 rss: 73Mb L: 71/88 MS: 1 ChangeByte- 00:07:23.538 [2024-07-25 11:54:00.759543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.538 [2024-07-25 11:54:00.759573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.759612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.538 [2024-07-25 11:54:00.759629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.759685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.538 [2024-07-25 11:54:00.759701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:23.538 #23 NEW cov: 12285 ft: 13900 corp: 9/580b lim: 90 exec/s: 0 rss: 73Mb L: 69/88 MS: 4 ChangeBinInt-ShuffleBytes-InsertRepeatedBytes-CrossOver- 00:07:23.538 [2024-07-25 11:54:00.799590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.538 [2024-07-25 11:54:00.799620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.799657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.538 [2024-07-25 11:54:00.799674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.538 [2024-07-25 11:54:00.799728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.539 [2024-07-25 11:54:00.799750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.539 #24 NEW cov: 12285 ft: 13923 corp: 10/650b lim: 90 exec/s: 0 rss: 73Mb L: 70/88 MS: 1 ChangeBit- 00:07:23.797 [2024-07-25 11:54:00.849478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.797 [2024-07-25 11:54:00.849508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.797 #26 NEW cov: 12285 ft: 14778 corp: 11/672b lim: 90 exec/s: 0 rss: 73Mb L: 22/88 MS: 2 ShuffleBytes-CrossOver- 00:07:23.797 [2024-07-25 11:54:00.889824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.797 [2024-07-25 11:54:00.889857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.889899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.797 [2024-07-25 11:54:00.889916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.889972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.797 [2024-07-25 11:54:00.890004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.797 #27 NEW cov: 12285 ft: 14834 corp: 12/742b lim: 90 exec/s: 0 rss: 73Mb L: 70/88 MS: 1 ShuffleBytes- 00:07:23.797 [2024-07-25 11:54:00.940165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.797 [2024-07-25 11:54:00.940196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.940233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.797 [2024-07-25 11:54:00.940249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.940304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.797 
[2024-07-25 11:54:00.940319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.940374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:23.797 [2024-07-25 11:54:00.940390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.797 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:23.797 #28 NEW cov: 12308 ft: 14867 corp: 13/826b lim: 90 exec/s: 0 rss: 74Mb L: 84/88 MS: 1 CrossOver- 00:07:23.797 [2024-07-25 11:54:00.990308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.797 [2024-07-25 11:54:00.990340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.990378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.797 [2024-07-25 11:54:00.990394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.990448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.797 [2024-07-25 11:54:00.990464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:00.990519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:23.797 [2024-07-25 11:54:00.990536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.797 #29 NEW cov: 12308 ft: 14885 corp: 14/902b lim: 90 exec/s: 0 rss: 74Mb L: 76/88 MS: 1 InsertRepeatedBytes- 00:07:23.797 [2024-07-25 11:54:01.030299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.797 [2024-07-25 11:54:01.030329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:01.030384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.797 [2024-07-25 11:54:01.030404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:01.030464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.797 [2024-07-25 11:54:01.030481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.797 #30 NEW cov: 12308 ft: 14900 corp: 15/972b lim: 90 exec/s: 0 rss: 74Mb L: 70/88 MS: 1 ChangeByte- 00:07:23.797 [2024-07-25 11:54:01.070350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:23.797 [2024-07-25 11:54:01.070380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 
11:54:01.070433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:23.797 [2024-07-25 11:54:01.070450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.797 [2024-07-25 11:54:01.070506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:23.797 [2024-07-25 11:54:01.070523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.797 #31 NEW cov: 12308 ft: 14908 corp: 16/1042b lim: 90 exec/s: 31 rss: 74Mb L: 70/88 MS: 1 ChangeBinInt- 00:07:24.055 [2024-07-25 11:54:01.110488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.055 [2024-07-25 11:54:01.110517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.110554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.055 [2024-07-25 11:54:01.110570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.110624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.055 [2024-07-25 11:54:01.110639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.055 #32 NEW cov: 12308 ft: 14922 corp: 17/1111b lim: 90 exec/s: 32 rss: 74Mb L: 69/88 MS: 1 ChangeBit- 00:07:24.055 [2024-07-25 11:54:01.160446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.055 [2024-07-25 11:54:01.160476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.160527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.055 [2024-07-25 11:54:01.160542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.055 #33 NEW cov: 12308 ft: 15243 corp: 18/1156b lim: 90 exec/s: 33 rss: 74Mb L: 45/88 MS: 1 CrossOver- 00:07:24.055 [2024-07-25 11:54:01.200712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.055 [2024-07-25 11:54:01.200746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.200787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.055 [2024-07-25 11:54:01.200802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.200857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.055 [2024-07-25 11:54:01.200872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.055 #34 NEW cov: 
12308 ft: 15272 corp: 19/1225b lim: 90 exec/s: 34 rss: 74Mb L: 69/88 MS: 1 ChangeBit- 00:07:24.055 [2024-07-25 11:54:01.240675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.055 [2024-07-25 11:54:01.240705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.240746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.055 [2024-07-25 11:54:01.240760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.055 #35 NEW cov: 12308 ft: 15295 corp: 20/1275b lim: 90 exec/s: 35 rss: 74Mb L: 50/88 MS: 1 CrossOver- 00:07:24.055 [2024-07-25 11:54:01.281111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.055 [2024-07-25 11:54:01.281140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.281184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.055 [2024-07-25 11:54:01.281199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.281252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.055 [2024-07-25 11:54:01.281268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.281322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.055 [2024-07-25 11:54:01.281339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.055 #36 NEW cov: 12308 ft: 15317 corp: 21/1364b lim: 90 exec/s: 36 rss: 74Mb L: 89/89 MS: 1 InsertByte- 00:07:24.055 [2024-07-25 11:54:01.331240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.055 [2024-07-25 11:54:01.331269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.331311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.055 [2024-07-25 11:54:01.331327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.331380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.055 [2024-07-25 11:54:01.331397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.055 [2024-07-25 11:54:01.331451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.055 [2024-07-25 11:54:01.331466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.055 #37 NEW cov: 12308 
ft: 15340 corp: 22/1452b lim: 90 exec/s: 37 rss: 74Mb L: 88/89 MS: 1 ChangeByte- 00:07:24.314 [2024-07-25 11:54:01.371364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.314 [2024-07-25 11:54:01.371393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.371435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.314 [2024-07-25 11:54:01.371453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.371505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.314 [2024-07-25 11:54:01.371522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.371580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.314 [2024-07-25 11:54:01.371596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.314 #38 NEW cov: 12308 ft: 15352 corp: 23/1538b lim: 90 exec/s: 38 rss: 74Mb L: 86/89 MS: 1 CrossOver- 00:07:24.314 [2024-07-25 11:54:01.421338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.314 [2024-07-25 11:54:01.421366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.421409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.314 [2024-07-25 11:54:01.421425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.421497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.314 [2024-07-25 11:54:01.421515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.314 #39 NEW cov: 12308 ft: 15360 corp: 24/1608b lim: 90 exec/s: 39 rss: 74Mb L: 70/89 MS: 1 CMP- DE: "\000\032\015\342\216R\224D"- 00:07:24.314 [2024-07-25 11:54:01.471504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.314 [2024-07-25 11:54:01.471532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.471568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.314 [2024-07-25 11:54:01.471584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.471639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.314 [2024-07-25 11:54:01.471655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:24.314 #40 NEW cov: 12308 ft: 15443 corp: 25/1677b lim: 90 exec/s: 40 rss: 74Mb L: 69/89 MS: 1 CrossOver- 00:07:24.314 [2024-07-25 11:54:01.521615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.314 [2024-07-25 11:54:01.521644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.521698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.314 [2024-07-25 11:54:01.521716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.521772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.314 [2024-07-25 11:54:01.521789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.314 #41 NEW cov: 12308 ft: 15456 corp: 26/1743b lim: 90 exec/s: 41 rss: 74Mb L: 66/89 MS: 1 EraseBytes- 00:07:24.314 [2024-07-25 11:54:01.571942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.314 [2024-07-25 11:54:01.571970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.572023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.314 [2024-07-25 11:54:01.572038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.572092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.314 [2024-07-25 11:54:01.572111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.572163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.314 [2024-07-25 11:54:01.572180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.314 #42 NEW cov: 12308 ft: 15466 corp: 27/1831b lim: 90 exec/s: 42 rss: 74Mb L: 88/89 MS: 1 CopyPart- 00:07:24.314 [2024-07-25 11:54:01.612027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.314 [2024-07-25 11:54:01.612056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.612094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.314 [2024-07-25 11:54:01.612111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.314 [2024-07-25 11:54:01.612165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.314 [2024-07-25 11:54:01.612181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:24.314 [2024-07-25 11:54:01.612236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.314 [2024-07-25 11:54:01.612251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.573 #43 NEW cov: 12308 ft: 15475 corp: 28/1919b lim: 90 exec/s: 43 rss: 74Mb L: 88/89 MS: 1 ChangeBinInt- 00:07:24.573 [2024-07-25 11:54:01.662032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.573 [2024-07-25 11:54:01.662065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.662103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.573 [2024-07-25 11:54:01.662122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.662178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.573 [2024-07-25 11:54:01.662194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.573 #44 NEW cov: 12308 ft: 15484 corp: 29/1985b lim: 90 exec/s: 44 rss: 74Mb L: 66/89 MS: 1 ChangeBit- 00:07:24.573 [2024-07-25 11:54:01.712319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.573 [2024-07-25 11:54:01.712348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.712387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.573 [2024-07-25 11:54:01.712403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.712456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.573 [2024-07-25 11:54:01.712487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.712541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.573 [2024-07-25 11:54:01.712557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.573 #45 NEW cov: 12308 ft: 15490 corp: 30/2072b lim: 90 exec/s: 45 rss: 74Mb L: 87/89 MS: 1 InsertRepeatedBytes- 00:07:24.573 [2024-07-25 11:54:01.752421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.573 [2024-07-25 11:54:01.752450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.752507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.573 [2024-07-25 11:54:01.752523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:24.573 [2024-07-25 11:54:01.752578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.573 [2024-07-25 11:54:01.752595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.752650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.573 [2024-07-25 11:54:01.752667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.573 #46 NEW cov: 12308 ft: 15492 corp: 31/2148b lim: 90 exec/s: 46 rss: 75Mb L: 76/89 MS: 1 CrossOver- 00:07:24.573 [2024-07-25 11:54:01.802595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.573 [2024-07-25 11:54:01.802624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.802662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.573 [2024-07-25 11:54:01.802678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.802731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.573 [2024-07-25 11:54:01.802751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.802804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.573 [2024-07-25 11:54:01.802819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.573 #47 NEW cov: 12308 ft: 15509 corp: 32/2237b lim: 90 exec/s: 47 rss: 75Mb L: 89/89 MS: 1 InsertByte- 00:07:24.573 [2024-07-25 11:54:01.842415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.573 [2024-07-25 11:54:01.842443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.573 [2024-07-25 11:54:01.842508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.573 [2024-07-25 11:54:01.842535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.833 #48 NEW cov: 12308 ft: 15522 corp: 33/2287b lim: 90 exec/s: 48 rss: 75Mb L: 50/89 MS: 1 ChangeBinInt- 00:07:24.833 [2024-07-25 11:54:01.892655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.833 [2024-07-25 11:54:01.892684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:01.892720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.833 [2024-07-25 11:54:01.892741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:24.833 [2024-07-25 11:54:01.892812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.833 [2024-07-25 11:54:01.892827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.833 #49 NEW cov: 12308 ft: 15561 corp: 34/2358b lim: 90 exec/s: 49 rss: 75Mb L: 71/89 MS: 1 ChangeBit- 00:07:24.833 [2024-07-25 11:54:01.932763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.833 [2024-07-25 11:54:01.932790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:01.932829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.833 [2024-07-25 11:54:01.932845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:01.932901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.833 [2024-07-25 11:54:01.932917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.833 #50 NEW cov: 12308 ft: 15580 corp: 35/2428b lim: 90 exec/s: 50 rss: 75Mb L: 70/89 MS: 1 ChangeByte- 00:07:24.833 [2024-07-25 11:54:01.972745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.833 [2024-07-25 11:54:01.972773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:01.972828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.833 [2024-07-25 11:54:01.972844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.833 #51 NEW cov: 12308 ft: 15618 corp: 36/2477b lim: 90 exec/s: 51 rss: 75Mb L: 49/89 MS: 1 EraseBytes- 00:07:24.833 [2024-07-25 11:54:02.023015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.833 [2024-07-25 11:54:02.023043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:02.023102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.833 [2024-07-25 11:54:02.023118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:02.023170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.833 [2024-07-25 11:54:02.023186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.833 #52 NEW cov: 12308 ft: 15621 corp: 37/2548b lim: 90 exec/s: 52 rss: 75Mb L: 71/89 MS: 1 ChangeBinInt- 00:07:24.833 [2024-07-25 11:54:02.073279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:24.833 [2024-07-25 11:54:02.073307] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:02.073346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:24.833 [2024-07-25 11:54:02.073362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:02.073416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:24.833 [2024-07-25 11:54:02.073433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.833 [2024-07-25 11:54:02.073491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:24.833 [2024-07-25 11:54:02.073508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.833 #53 NEW cov: 12308 ft: 15677 corp: 38/2637b lim: 90 exec/s: 26 rss: 75Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:07:24.833 #53 DONE cov: 12308 ft: 15677 corp: 38/2637b lim: 90 exec/s: 26 rss: 75Mb 00:07:24.833 ###### Recommended dictionary. ###### 00:07:24.833 "\000\032\015\342\216R\224D" # Uses: 0 00:07:24.833 ###### End of recommended dictionary. ###### 00:07:24.833 Done 53 runs in 2 second(s) 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:25.092 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:25.093 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:25.093 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:25.093 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:25.093 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:25.093 11:54:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:25.093 [2024-07-25 11:54:02.292866] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:25.093 [2024-07-25 11:54:02.292942] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909652 ] 00:07:25.093 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.352 [2024-07-25 11:54:02.510879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.352 [2024-07-25 11:54:02.582763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.352 [2024-07-25 11:54:02.642560] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.611 [2024-07-25 11:54:02.658883] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:25.611 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.611 INFO: Seed: 2748440099 00:07:25.611 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:25.611 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:25.611 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:25.611 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.611 #2 INITED exec/s: 0 rss: 65Mb 00:07:25.611 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:25.611 This may also happen if the target rejected all inputs we tried so far 00:07:25.611 [2024-07-25 11:54:02.729927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:25.611 [2024-07-25 11:54:02.729971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.611 [2024-07-25 11:54:02.730064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:25.611 [2024-07-25 11:54:02.730086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.611 [2024-07-25 11:54:02.730192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:25.611 [2024-07-25 11:54:02.730210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.611 [2024-07-25 11:54:02.730320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:25.611 [2024-07-25 11:54:02.730337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.870 NEW_FUNC[1/702]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:25.870 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.870 #31 NEW cov: 12057 ft: 12058 corp: 2/49b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 4 InsertRepeatedBytes-CrossOver-CrossOver-InsertRepeatedBytes- 00:07:25.870 [2024-07-25 11:54:03.089620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:25.870 [2024-07-25 11:54:03.089666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.870 [2024-07-25 11:54:03.089776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:25.870 [2024-07-25 11:54:03.089800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.870 #37 NEW cov: 12170 ft: 13082 corp: 3/78b lim: 50 exec/s: 0 rss: 72Mb L: 29/48 MS: 1 CrossOver- 00:07:25.870 [2024-07-25 11:54:03.150491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:25.870 [2024-07-25 11:54:03.150524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.870 [2024-07-25 11:54:03.150586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:25.870 [2024-07-25 11:54:03.150607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.870 [2024-07-25 11:54:03.150670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:25.870 [2024-07-25 11:54:03.150688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:25.870 [2024-07-25 11:54:03.150786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:25.870 [2024-07-25 11:54:03.150806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.870 #38 NEW cov: 12176 ft: 13371 corp: 4/121b lim: 50 exec/s: 0 rss: 72Mb L: 43/48 MS: 1 InsertRepeatedBytes- 00:07:26.129 [2024-07-25 11:54:03.200738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.129 [2024-07-25 11:54:03.200770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.200855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.129 [2024-07-25 11:54:03.200878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.200941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.129 [2024-07-25 11:54:03.200959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.201056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.129 [2024-07-25 11:54:03.201074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.129 #39 NEW cov: 12261 ft: 13692 corp: 5/164b lim: 50 exec/s: 0 rss: 72Mb L: 43/48 MS: 1 ChangeBit- 00:07:26.129 [2024-07-25 11:54:03.270899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.129 [2024-07-25 11:54:03.270926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.271007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.129 [2024-07-25 11:54:03.271029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.271080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.129 [2024-07-25 11:54:03.271099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.271189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.129 [2024-07-25 11:54:03.271206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.129 #40 NEW cov: 12261 ft: 13845 corp: 6/213b lim: 50 exec/s: 0 rss: 72Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:26.129 [2024-07-25 11:54:03.341216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.129 [2024-07-25 11:54:03.341245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.341338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.129 [2024-07-25 11:54:03.341357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.341410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.129 [2024-07-25 11:54:03.341427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.341538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.129 [2024-07-25 11:54:03.341556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.129 #41 NEW cov: 12261 ft: 13909 corp: 7/262b lim: 50 exec/s: 0 rss: 72Mb L: 49/49 MS: 1 ChangeBit- 00:07:26.129 [2024-07-25 11:54:03.411814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.129 [2024-07-25 11:54:03.411845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.411927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.129 [2024-07-25 11:54:03.411947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.129 [2024-07-25 11:54:03.412019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.130 [2024-07-25 11:54:03.412037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.130 [2024-07-25 11:54:03.412138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.130 [2024-07-25 11:54:03.412158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.130 [2024-07-25 11:54:03.412250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:26.130 [2024-07-25 11:54:03.412271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.388 #42 NEW cov: 12261 ft: 14017 corp: 8/312b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 CrossOver- 00:07:26.388 [2024-07-25 11:54:03.481619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.388 [2024-07-25 11:54:03.481649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.481724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.388 [2024-07-25 11:54:03.481750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.481822] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.388 [2024-07-25 11:54:03.481841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.481940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.388 [2024-07-25 11:54:03.481961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.388 #43 NEW cov: 12261 ft: 14072 corp: 9/361b lim: 50 exec/s: 0 rss: 72Mb L: 49/50 MS: 1 ShuffleBytes- 00:07:26.388 [2024-07-25 11:54:03.531872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.388 [2024-07-25 11:54:03.531902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.531981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.388 [2024-07-25 11:54:03.532002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.532075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.388 [2024-07-25 11:54:03.532091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.532188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.388 [2024-07-25 11:54:03.532206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.388 #49 NEW cov: 12261 ft: 14151 corp: 10/410b lim: 50 exec/s: 0 rss: 72Mb L: 49/50 MS: 1 ChangeByte- 00:07:26.388 [2024-07-25 11:54:03.582411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.388 [2024-07-25 11:54:03.582440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.582522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.388 [2024-07-25 11:54:03.582544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.582623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.388 [2024-07-25 11:54:03.582647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.582741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.388 [2024-07-25 11:54:03.582761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.582883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:26.388 [2024-07-25 11:54:03.582902] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.388 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:26.388 #50 NEW cov: 12284 ft: 14207 corp: 11/460b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:26.388 [2024-07-25 11:54:03.652622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.388 [2024-07-25 11:54:03.652653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.652728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.388 [2024-07-25 11:54:03.652749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.652834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.388 [2024-07-25 11:54:03.652855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.652944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.388 [2024-07-25 11:54:03.652963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.388 [2024-07-25 11:54:03.653056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:26.388 [2024-07-25 11:54:03.653078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.388 #51 NEW cov: 12284 ft: 14241 corp: 12/510b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:26.647 [2024-07-25 11:54:03.721970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.647 [2024-07-25 11:54:03.722003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.722102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.647 [2024-07-25 11:54:03.722121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.647 #52 NEW cov: 12284 ft: 14280 corp: 13/531b lim: 50 exec/s: 52 rss: 73Mb L: 21/50 MS: 1 CrossOver- 00:07:26.647 [2024-07-25 11:54:03.772912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.647 [2024-07-25 11:54:03.772942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.772996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.647 [2024-07-25 11:54:03.773016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.773074] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.647 [2024-07-25 11:54:03.773095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.773186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.647 [2024-07-25 11:54:03.773205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.647 #53 NEW cov: 12284 ft: 14331 corp: 14/579b lim: 50 exec/s: 53 rss: 73Mb L: 48/50 MS: 1 ShuffleBytes- 00:07:26.647 [2024-07-25 11:54:03.843104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.647 [2024-07-25 11:54:03.843136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.843211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.647 [2024-07-25 11:54:03.843227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.843312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.647 [2024-07-25 11:54:03.843331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.843425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.647 [2024-07-25 11:54:03.843443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.647 #54 NEW cov: 12284 ft: 14347 corp: 15/628b lim: 50 exec/s: 54 rss: 73Mb L: 49/50 MS: 1 CMP- DE: "\362z\020x\201\177\000\000"- 00:07:26.647 [2024-07-25 11:54:03.913381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.647 [2024-07-25 11:54:03.913412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.913483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.647 [2024-07-25 11:54:03.913502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.913587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.647 [2024-07-25 11:54:03.913607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.647 [2024-07-25 11:54:03.913710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.647 [2024-07-25 11:54:03.913729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.905 #55 NEW cov: 12284 ft: 14368 corp: 16/677b lim: 50 exec/s: 55 rss: 73Mb L: 49/50 MS: 1 ShuffleBytes- 00:07:26.905 [2024-07-25 
11:54:03.983918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.905 [2024-07-25 11:54:03.983946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:03.984030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.906 [2024-07-25 11:54:03.984048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:03.984126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.906 [2024-07-25 11:54:03.984143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:03.984237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.906 [2024-07-25 11:54:03.984257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:03.984353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:26.906 [2024-07-25 11:54:03.984369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.906 #56 NEW cov: 12284 ft: 14430 corp: 17/727b lim: 50 exec/s: 56 rss: 73Mb L: 50/50 MS: 1 InsertByte- 00:07:26.906 [2024-07-25 11:54:04.054189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:26.906 [2024-07-25 11:54:04.054217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:04.054299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:26.906 [2024-07-25 11:54:04.054320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:04.054390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:26.906 [2024-07-25 11:54:04.054410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:04.054504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:26.906 [2024-07-25 11:54:04.054526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.906 [2024-07-25 11:54:04.054623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:26.906 [2024-07-25 11:54:04.054641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.906 #57 NEW cov: 12284 ft: 14457 corp: 18/777b lim: 50 exec/s: 57 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:07:26.906 [2024-07-25 11:54:04.124140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:07:26.906 [2024-07-25 11:54:04.124168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:26.906 [2024-07-25 11:54:04.124250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:26.906 [2024-07-25 11:54:04.124267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:26.906 [2024-07-25 11:54:04.124340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:26.906 [2024-07-25 11:54:04.124358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:26.906 [2024-07-25 11:54:04.124460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:26.906 [2024-07-25 11:54:04.124478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:26.906 #58 NEW cov: 12284 ft: 14487 corp: 19/820b lim: 50 exec/s: 58 rss: 73Mb L: 43/50 MS: 1 ChangeBinInt-
00:07:26.906 [2024-07-25 11:54:04.174330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:26.906 [2024-07-25 11:54:04.174358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:26.906 [2024-07-25 11:54:04.174455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:26.906 [2024-07-25 11:54:04.174474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:26.906 [2024-07-25 11:54:04.174556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:26.906 [2024-07-25 11:54:04.174580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:26.906 [2024-07-25 11:54:04.174672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:26.906 [2024-07-25 11:54:04.174689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.165 #59 NEW cov: 12284 ft: 14501 corp: 20/869b lim: 50 exec/s: 59 rss: 73Mb L: 49/50 MS: 1 ChangeBit-
00:07:27.165 [2024-07-25 11:54:04.234932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.165 [2024-07-25 11:54:04.234962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.235046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.165 [2024-07-25 11:54:04.235066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.235136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.165 [2024-07-25 11:54:04.235157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.235248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.165 [2024-07-25 11:54:04.235267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.235364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0
00:07:27.165 [2024-07-25 11:54:04.235384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:27.165 #60 NEW cov: 12284 ft: 14517 corp: 21/919b lim: 50 exec/s: 60 rss: 73Mb L: 50/50 MS: 1 ShuffleBytes-
00:07:27.165 [2024-07-25 11:54:04.305259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.165 [2024-07-25 11:54:04.305290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.305371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.165 [2024-07-25 11:54:04.305390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.305449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.165 [2024-07-25 11:54:04.305469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.305569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.165 [2024-07-25 11:54:04.305589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.305680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0
00:07:27.165 [2024-07-25 11:54:04.305698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:27.165 #61 NEW cov: 12284 ft: 14567 corp: 22/969b lim: 50 exec/s: 61 rss: 73Mb L: 50/50 MS: 1 ShuffleBytes-
00:07:27.165 [2024-07-25 11:54:04.355170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.165 [2024-07-25 11:54:04.355204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.355276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.165 [2024-07-25 11:54:04.355299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.355356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.165 [2024-07-25 11:54:04.355375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.355465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.165 [2024-07-25 11:54:04.355482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.165 #62 NEW cov: 12284 ft: 14585 corp: 23/1018b lim: 50 exec/s: 62 rss: 73Mb L: 49/50 MS: 1 CMP- DE: "\001\032\015\337vz\330z"-
00:07:27.165 [2024-07-25 11:54:04.404335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.165 [2024-07-25 11:54:04.404367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.165 #67 NEW cov: 12284 ft: 15352 corp: 24/1028b lim: 50 exec/s: 67 rss: 73Mb L: 10/50 MS: 5 CopyPart-CMP-ChangeBit-CopyPart-CMP- DE: "\377\011"-"\017\000\000\000"-
00:07:27.165 [2024-07-25 11:54:04.465571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.165 [2024-07-25 11:54:04.465603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.465680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.165 [2024-07-25 11:54:04.465696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.465799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.165 [2024-07-25 11:54:04.465818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.165 [2024-07-25 11:54:04.465918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.165 [2024-07-25 11:54:04.465952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.424 #68 NEW cov: 12284 ft: 15363 corp: 25/1077b lim: 50 exec/s: 68 rss: 73Mb L: 49/50 MS: 1 PersAutoDict- DE: "\001\032\015\337vz\330z"-
00:07:27.424 [2024-07-25 11:54:04.516236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.424 [2024-07-25 11:54:04.516268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.424 [2024-07-25 11:54:04.516340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.424 [2024-07-25 11:54:04.516358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.424 [2024-07-25 11:54:04.516414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.424 [2024-07-25 11:54:04.516431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.424 [2024-07-25 11:54:04.516525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.424 [2024-07-25 11:54:04.516543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.516639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0
00:07:27.425 [2024-07-25 11:54:04.516659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:27.425 #69 NEW cov: 12284 ft: 15379 corp: 26/1127b lim: 50 exec/s: 69 rss: 73Mb L: 50/50 MS: 1 ShuffleBytes-
00:07:27.425 [2024-07-25 11:54:04.586464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.425 [2024-07-25 11:54:04.586493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.586576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.425 [2024-07-25 11:54:04.586593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.586671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.425 [2024-07-25 11:54:04.586691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.586779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.425 [2024-07-25 11:54:04.586799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.586888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0
00:07:27.425 [2024-07-25 11:54:04.586909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:27.425 #70 NEW cov: 12284 ft: 15380 corp: 27/1177b lim: 50 exec/s: 70 rss: 73Mb L: 50/50 MS: 1 ChangeByte-
00:07:27.425 [2024-07-25 11:54:04.656246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.425 [2024-07-25 11:54:04.656275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.656344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.425 [2024-07-25 11:54:04.656363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.656437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.425 [2024-07-25 11:54:04.656456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.656548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.425 [2024-07-25 11:54:04.656568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.425 #71 NEW cov: 12284 ft: 15402 corp: 28/1226b lim: 50 exec/s: 71 rss: 73Mb L: 49/50 MS: 1 CrossOver-
00:07:27.425 [2024-07-25 11:54:04.706907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:27.425 [2024-07-25 11:54:04.706934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.707026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:27.425 [2024-07-25 11:54:04.707044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.707123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:27.425 [2024-07-25 11:54:04.707139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.707221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0
00:07:27.425 [2024-07-25 11:54:04.707244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:27.425 [2024-07-25 11:54:04.707334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0
00:07:27.425 [2024-07-25 11:54:04.707352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:27.684 #72 NEW cov: 12284 ft: 15422 corp: 29/1276b lim: 50 exec/s: 36 rss: 73Mb L: 50/50 MS: 1 CopyPart-
00:07:27.684 #72 DONE cov: 12284 ft: 15422 corp: 29/1276b lim: 50 exec/s: 36 rss: 73Mb
00:07:27.684 ###### Recommended dictionary. ######
00:07:27.684 "\362z\020x\201\177\000\000" # Uses: 0
00:07:27.684 "\001\032\015\337vz\330z" # Uses: 1
00:07:27.684 "\377\011" # Uses: 0
00:07:27.684 "\017\000\000\000" # Uses: 0
00:07:27.684 ###### End of recommended dictionary. ######
00:07:27.684 Done 72 runs in 2 second(s)
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422'
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
11:54:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22
[2024-07-25 11:54:04.917826] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:07:27.685 [2024-07-25 11:54:04.917901] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910182 ]
EAL: No free 2048 kB hugepages reported on node 1
00:07:27.944 [2024-07-25 11:54:05.134416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.202 [2024-07-25 11:54:05.209763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.202 [2024-07-25 11:54:05.270135] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:28.202 [2024-07-25 11:54:05.286456] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 ***
00:07:28.202 INFO: Running with entropic power schedule (0xFF, 100).
00:07:28.202 INFO: Seed: 1081484563
00:07:28.202 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
00:07:28.202 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
00:07:28.202 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:28.202 INFO: A corpus is not provided, starting from an empty corpus
00:07:28.202 #2 INITED exec/s: 0 rss: 65Mb
00:07:28.202 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:28.202 This may also happen if the target rejected all inputs we tried so far
00:07:28.202 [2024-07-25 11:54:05.334674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.202 [2024-07-25 11:54:05.334715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.202 [2024-07-25 11:54:05.334767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.202 [2024-07-25 11:54:05.334787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.461 NEW_FUNC[1/702]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644
00:07:28.461 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:28.461 #12 NEW cov: 12077 ft: 12075 corp: 2/37b lim: 85 exec/s: 0 rss: 72Mb L: 36/36 MS: 5 CopyPart-CrossOver-ChangeByte-ChangeByte-InsertRepeatedBytes-
00:07:28.461 [2024-07-25 11:54:05.728357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.461 [2024-07-25 11:54:05.728404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.461 [2024-07-25 11:54:05.728486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.461 [2024-07-25 11:54:05.728506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.461 [2024-07-25 11:54:05.728598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.461 [2024-07-25 11:54:05.728618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.461 [2024-07-25 11:54:05.728726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:28.461 [2024-07-25 11:54:05.728749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.461 #15 NEW cov: 12196 ft: 13080 corp: 3/121b lim: 85 exec/s: 0 rss: 72Mb L: 84/84 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes-
00:07:28.720 [2024-07-25 11:54:05.788419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.720 [2024-07-25 11:54:05.788453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.788519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.720 [2024-07-25 11:54:05.788539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.788620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.720 [2024-07-25 11:54:05.788640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.788740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:28.720 [2024-07-25 11:54:05.788759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.720 #16 NEW cov: 12202 ft: 13385 corp: 4/205b lim: 85 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 ChangeBinInt-
00:07:28.720 [2024-07-25 11:54:05.858707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.720 [2024-07-25 11:54:05.858743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.858804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.720 [2024-07-25 11:54:05.858823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.858888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.720 [2024-07-25 11:54:05.858907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.859009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:28.720 [2024-07-25 11:54:05.859027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.720 #17 NEW cov: 12287 ft: 13583 corp: 5/289b lim: 85 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 ChangeBit-
00:07:28.720 [2024-07-25 11:54:05.928917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.720 [2024-07-25 11:54:05.928948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.929029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.720 [2024-07-25 11:54:05.929050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.929115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.720 [2024-07-25 11:54:05.929136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.929230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:28.720 [2024-07-25 11:54:05.929250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.720 #18 NEW cov: 12287 ft: 13675 corp: 6/373b lim: 85 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 ChangeBit-
00:07:28.720 [2024-07-25 11:54:05.979205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.720 [2024-07-25 11:54:05.979235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.979314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.720 [2024-07-25 11:54:05.979332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.979406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.720 [2024-07-25 11:54:05.979426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.720 [2024-07-25 11:54:05.979533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:28.720 [2024-07-25 11:54:05.979553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.720 #19 NEW cov: 12287 ft: 13725 corp: 7/457b lim: 85 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 ChangeBit-
00:07:28.979 [2024-07-25 11:54:06.028754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.979 [2024-07-25 11:54:06.028788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.028889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.979 [2024-07-25 11:54:06.028911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.979 #20 NEW cov: 12287 ft: 13821 corp: 8/493b lim: 85 exec/s: 0 rss: 72Mb L: 36/84 MS: 1 CopyPart-
00:07:28.979 [2024-07-25 11:54:06.099632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.979 [2024-07-25 11:54:06.099661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.099748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.979 [2024-07-25 11:54:06.099765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.099823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.979 [2024-07-25 11:54:06.099840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.099937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:28.979 [2024-07-25 11:54:06.099955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.979 #21 NEW cov: 12287 ft: 13838 corp: 9/577b lim: 85 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 ChangeByte-
00:07:28.979 [2024-07-25 11:54:06.149491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.979 [2024-07-25 11:54:06.149521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.149597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.979 [2024-07-25 11:54:06.149618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.149687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.979 [2024-07-25 11:54:06.149705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.979 #22 NEW cov: 12287 ft: 14115 corp: 10/631b lim: 85 exec/s: 0 rss: 72Mb L: 54/84 MS: 1 EraseBytes-
00:07:28.979 [2024-07-25 11:54:06.219432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.979 [2024-07-25 11:54:06.219458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.219526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.979 [2024-07-25 11:54:06.219544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.979 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:28.979 #23 NEW cov: 12304 ft: 14190 corp: 11/667b lim: 85 exec/s: 0 rss: 72Mb L: 36/84 MS: 1 CopyPart-
00:07:28.979 [2024-07-25 11:54:06.270255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:28.979 [2024-07-25 11:54:06.270283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.270377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:28.979 [2024-07-25 11:54:06.270397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.979 [2024-07-25 11:54:06.270451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:28.979 [2024-07-25 11:54:06.270471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.238 #24 NEW cov: 12304 ft: 14215 corp: 12/729b lim: 85 exec/s: 0 rss: 72Mb L: 62/84 MS: 1 InsertRepeatedBytes-
00:07:29.238 [2024-07-25 11:54:06.340910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.238 [2024-07-25 11:54:06.340938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.238 [2024-07-25 11:54:06.341012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.238 [2024-07-25 11:54:06.341029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.238 [2024-07-25 11:54:06.341109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:29.238 [2024-07-25 11:54:06.341127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.238 [2024-07-25 11:54:06.341219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:29.238 [2024-07-25 11:54:06.341237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:29.238 #25 NEW cov: 12304 ft: 14229 corp: 13/813b lim: 85 exec/s: 25 rss: 72Mb L: 84/84 MS: 1 ShuffleBytes-
00:07:29.238 [2024-07-25 11:54:06.390889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.238 [2024-07-25 11:54:06.390918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.238 [2024-07-25 11:54:06.390976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.238 [2024-07-25 11:54:06.391014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.238 [2024-07-25 11:54:06.391092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:29.238 [2024-07-25 11:54:06.391114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.239 #26 NEW cov: 12304 ft: 14249 corp: 14/867b lim: 85 exec/s: 26 rss: 73Mb L: 54/84 MS: 1 ChangeByte-
00:07:29.239 [2024-07-25 11:54:06.461829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.239 [2024-07-25 11:54:06.461863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.239 [2024-07-25 11:54:06.461931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.239 [2024-07-25 11:54:06.461947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.239 [2024-07-25 11:54:06.462014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:29.239 [2024-07-25 11:54:06.462032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.239 [2024-07-25 11:54:06.462134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:29.239 [2024-07-25 11:54:06.462155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:29.239 #27 NEW cov: 12304 ft: 14264 corp: 15/951b lim: 85 exec/s: 27 rss: 73Mb L: 84/84 MS: 1 CrossOver-
00:07:29.239 [2024-07-25 11:54:06.532081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.239 [2024-07-25 11:54:06.532113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.239 [2024-07-25 11:54:06.532174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.239 [2024-07-25 11:54:06.532194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.239 [2024-07-25 11:54:06.532251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:29.239 [2024-07-25 11:54:06.532272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.239 [2024-07-25 11:54:06.532365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:29.239 [2024-07-25 11:54:06.532383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:29.498 #28 NEW cov: 12304 ft: 14355 corp: 16/1023b lim: 85 exec/s: 28 rss: 73Mb L: 72/84 MS: 1 EraseBytes-
00:07:29.498 [2024-07-25 11:54:06.581821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.498 [2024-07-25 11:54:06.581852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.581923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.498 [2024-07-25 11:54:06.581942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.498 #29 NEW cov: 12304 ft: 14372 corp: 17/1059b lim: 85 exec/s: 29 rss: 73Mb L: 36/84 MS: 1 ShuffleBytes-
00:07:29.498 [2024-07-25 11:54:06.652896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.498 [2024-07-25 11:54:06.652925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.652996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.498 [2024-07-25 11:54:06.653016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.653076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:29.498 [2024-07-25 11:54:06.653094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.653191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:29.498 [2024-07-25 11:54:06.653209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:29.498 #30 NEW cov: 12304 ft: 14403 corp: 18/1143b lim: 85 exec/s: 30 rss: 73Mb L: 84/84 MS: 1 ChangeBit-
00:07:29.498 [2024-07-25 11:54:06.722538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.498 [2024-07-25 11:54:06.722570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.722674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.498 [2024-07-25 11:54:06.722692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.498 #31 NEW cov: 12304 ft: 14415 corp: 19/1179b lim: 85 exec/s: 31 rss: 73Mb L: 36/84 MS: 1 CopyPart-
00:07:29.498 [2024-07-25 11:54:06.793567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.498 [2024-07-25 11:54:06.793602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.793663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.498 [2024-07-25 11:54:06.793685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.793752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:29.498 [2024-07-25 11:54:06.793771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.498 [2024-07-25 11:54:06.793870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:29.498 [2024-07-25 11:54:06.793891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:29.757 #32 NEW cov: 12304 ft: 14466 corp: 20/1263b lim: 85 exec/s: 32 rss: 73Mb L: 84/84 MS: 1 ChangeBinInt-
00:07:29.757 [2024-07-25 11:54:06.863865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.757 [2024-07-25 11:54:06.863892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.757 [2024-07-25 11:54:06.863986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.757 [2024-07-25 11:54:06.864004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.757 [2024-07-25 11:54:06.864084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:29.757 [2024-07-25 11:54:06.864103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.757 [2024-07-25 11:54:06.864196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:29.757 [2024-07-25 11:54:06.864213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:29.757 #33 NEW cov: 12304 ft: 14486 corp: 21/1339b lim: 85 exec/s: 33 rss: 73Mb L: 76/84 MS: 1 InsertRepeatedBytes-
00:07:29.757 [2024-07-25 11:54:06.913078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.757 [2024-07-25 11:54:06.913107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.757 #34 NEW cov: 12304 ft: 15328 corp: 22/1370b lim: 85 exec/s: 34 rss: 73Mb L: 31/84 MS: 1 EraseBytes-
00:07:29.757 [2024-07-25 11:54:06.983649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.757 [2024-07-25 11:54:06.983675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.757 [2024-07-25 11:54:06.983740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.757 [2024-07-25 11:54:06.983758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.757 #35 NEW cov: 12304 ft: 15374 corp: 23/1406b lim: 85 exec/s: 35 rss: 73Mb L: 36/84 MS: 1 CrossOver-
00:07:29.757 [2024-07-25 11:54:07.033878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:29.757 [2024-07-25 11:54:07.033906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.757 [2024-07-25 11:54:07.033966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:29.757 [2024-07-25 11:54:07.033985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.757 #36 NEW cov: 12304 ft: 15433 corp: 24/1442b lim: 85 exec/s: 36 rss: 73Mb L: 36/84 MS: 1 CrossOver-
00:07:30.016 [2024-07-25 11:54:07.084812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:30.016 [2024-07-25 11:54:07.084839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.084923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:30.016 [2024-07-25 11:54:07.084939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.085030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:30.016 [2024-07-25 11:54:07.085045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.085138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:30.016 [2024-07-25 11:54:07.085153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:30.016 #37 NEW cov: 12304 ft: 15448 corp: 25/1526b lim: 85 exec/s: 37 rss: 73Mb L: 84/84 MS: 1 ChangeBinInt-
00:07:30.016 [2024-07-25 11:54:07.144684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:30.016 [2024-07-25 11:54:07.144709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.144741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:30.016 [2024-07-25 11:54:07.144762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.144826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:30.016 [2024-07-25 11:54:07.144846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:30.016 #38 NEW cov: 12304 ft: 15457 corp: 26/1588b lim: 85 exec/s: 38 rss: 73Mb L: 62/84 MS: 1 ChangeByte-
00:07:30.016 [2024-07-25 11:54:07.204674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:30.016 [2024-07-25 11:54:07.204702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.204798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:30.016 [2024-07-25 11:54:07.204816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:30.016 #39 NEW cov: 12311 ft: 15501 corp: 27/1624b lim: 85 exec/s: 39 rss: 73Mb L: 36/84 MS: 1 ShuffleBytes-
00:07:30.016 [2024-07-25 11:54:07.255430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:30.016 [2024-07-25 11:54:07.255456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.255549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:30.016 [2024-07-25 11:54:07.255565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.255650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:30.016 [2024-07-25 11:54:07.255668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.255764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:30.016 [2024-07-25 11:54:07.255783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:30.016 #40 NEW cov: 12311 ft: 15579 corp: 28/1692b lim: 85 exec/s: 40 rss: 73Mb L: 68/84 MS: 1 CopyPart-
00:07:30.016 [2024-07-25 11:54:07.305715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:30.016 [2024-07-25 11:54:07.305745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.305837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:30.016 [2024-07-25 11:54:07.305856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.305948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:30.016 [2024-07-25 11:54:07.305968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:30.016 [2024-07-25 11:54:07.306063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:30.016 [2024-07-25 11:54:07.306084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:30.274 [2024-07-25 11:54:07.356105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:30.274 [2024-07-25 11:54:07.356134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:30.274 [2024-07-25 11:54:07.356235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:30.274 [2024-07-25 11:54:07.356271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:30.274 [2024-07-25 11:54:07.356327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:30.274 [2024-07-25 11:54:07.356345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:30.274 [2024-07-25 11:54:07.356439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:30.274 [2024-07-25 11:54:07.356455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:30.274 [2024-07-25 11:54:07.356543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0
00:07:30.274 [2024-07-25 11:54:07.356561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:30.274 #42 NEW cov: 12311 ft: 15623 corp: 29/1777b lim: 85 exec/s: 21 rss: 73Mb L: 85/85 MS: 2 ShuffleBytes-CopyPart-
00:07:30.274 #42 DONE cov: 12311 ft: 15623 corp: 29/1777b lim: 85 exec/s: 21 rss: 73Mb
00:07:30.274 Done 42 runs in 2 second(s)
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423'
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
11:54:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23
[2024-07-25 11:54:07.546218] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:07:30.275 [2024-07-25 11:54:07.546302] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910787 ]
00:07:30.534 EAL: No free 2048 kB hugepages reported on node 1
00:07:30.534 [2024-07-25 11:54:07.758768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.534 [2024-07-25 11:54:07.829406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.792 [2024-07-25 11:54:07.889647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:30.792 [2024-07-25 11:54:07.905973] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 ***
00:07:30.793 INFO: Running with entropic power schedule (0xFF, 100).
00:07:30.793 INFO: Seed: 3703476179
00:07:30.793 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121),
00:07:30.793 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78),
00:07:30.793 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:07:30.793 INFO: A corpus is not provided, starting from an empty corpus
00:07:30.793 #2 INITED exec/s: 0 rss: 64Mb
00:07:30.793 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:30.793 This may also happen if the target rejected all inputs we tried so far
00:07:30.793 [2024-07-25 11:54:07.982550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:30.793 [2024-07-25 11:54:07.982592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.051 NEW_FUNC[1/701]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671
00:07:31.051 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:31.051 #4 NEW cov: 11991 ft: 11992 corp: 2/7b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 2 ChangeByte-InsertRepeatedBytes-
00:07:31.051 [2024-07-25 11:54:08.334401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.051 [2024-07-25 11:54:08.334451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.051 [2024-07-25 11:54:08.334549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:31.051 [2024-07-25 11:54:08.334572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:31.051 [2024-07-25 11:54:08.334680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:31.051 [2024-07-25 11:54:08.334707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:31.051 [2024-07-25 11:54:08.334819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:07:31.051 [2024-07-25 11:54:08.334842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:31.310 #5 NEW cov: 12128 ft: 13176 corp: 3/30b lim: 25 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes-
00:07:31.310 [2024-07-25 11:54:08.404305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.310 [2024-07-25 11:54:08.404334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.310 [2024-07-25 11:54:08.404412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:31.310 [2024-07-25 11:54:08.404431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:31.310 [2024-07-25 11:54:08.404491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:31.310 [2024-07-25 11:54:08.404510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:31.310 #8 NEW cov: 12134 ft: 13656 corp: 4/46b lim: 25 exec/s: 0 rss: 72Mb L: 16/23 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes-
00:07:31.310 [2024-07-25 11:54:08.454258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.310 [2024-07-25 11:54:08.454287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.310 [2024-07-25 11:54:08.454352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:31.310 [2024-07-25 11:54:08.454370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:31.310 #9 NEW cov: 12219 ft: 14079 corp: 5/56b lim: 25 exec/s: 0 rss: 72Mb L: 10/23 MS: 1 EraseBytes-
00:07:31.310 [2024-07-25 11:54:08.514232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.310 [2024-07-25 11:54:08.514259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.310 #10 NEW cov: 12219 ft: 14162 corp: 6/63b lim: 25 exec/s: 0 rss: 72Mb L: 7/23 MS: 1 CopyPart-
00:07:31.310 [2024-07-25 11:54:08.565055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.310 [2024-07-25 11:54:08.565083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.310 [2024-07-25 11:54:08.565174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:31.310 [2024-07-25 11:54:08.565194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:31.310 [2024-07-25 11:54:08.565255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:31.310 [2024-07-25 11:54:08.565274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:31.310 #11 NEW cov: 12219 ft: 14226 corp: 7/78b lim: 25 exec/s: 0 rss: 72Mb L: 15/23 MS: 1 CMP- DE: "\033\240!O\036\362\345\377"-
00:07:31.642 [2024-07-25 11:54:08.635019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.642 [2024-07-25 11:54:08.635049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.642 [2024-07-25 11:54:08.635145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:31.642 [2024-07-25 11:54:08.635164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:31.642 #12 NEW cov: 12219 ft: 14306 corp: 8/92b lim: 25 exec/s: 0 rss: 72Mb L: 14/23 MS: 1 PersAutoDict- DE: "\033\240!O\036\362\345\377"-
00:07:31.642 [2024-07-25 11:54:08.685234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.642 [2024-07-25 11:54:08.685261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.642 #13 NEW cov: 12219 ft: 14379 corp: 9/98b lim: 25 exec/s: 0 rss: 73Mb L: 6/23 MS: 1 CopyPart-
00:07:31.642 [2024-07-25 11:54:08.735729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.642 [2024-07-25 11:54:08.735760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.642 [2024-07-25 11:54:08.735844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:31.642 [2024-07-25 11:54:08.735864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:31.642 #14 NEW cov: 12219 ft: 14426 corp: 10/112b lim: 25 exec/s: 0 rss: 73Mb L: 14/23 MS: 1 ChangeBinInt-
00:07:31.642 [2024-07-25 11:54:08.796864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:07:31.642 [2024-07-25 11:54:08.796890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:31.642 [2024-07-25 11:54:08.796997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:07:31.642 [2024-07-25 11:54:08.797018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:31.642 [2024-07-25 11:54:08.797104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:07:31.642 [2024-07-25 11:54:08.797125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:31.642 [2024-07-25 11:54:08.797228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:07:31.642 [2024-07-25 11:54:08.797245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:31.642 [2024-07-25 11:54:08.797335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:07:31.642 [2024-07-25 11:54:08.797351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:31.642 #15 NEW cov: 12242 ft: 14489 corp: 11/137b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CrossOver- 00:07:31.642 [2024-07-25 11:54:08.866066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:31.642 [2024-07-25 11:54:08.866093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.642 #16 NEW cov: 12242 ft: 14512 corp: 12/144b lim: 25 exec/s: 0 rss: 73Mb L: 7/25 MS: 1 CrossOver- 00:07:31.642 [2024-07-25 11:54:08.917036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:31.642 [2024-07-25 11:54:08.917064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.642 [2024-07-25 11:54:08.917169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:31.642 [2024-07-25 11:54:08.917206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.642 [2024-07-25 11:54:08.917265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:31.642 [2024-07-25 11:54:08.917282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.642 [2024-07-25 11:54:08.917388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:31.642 [2024-07-25 11:54:08.917408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.916 #17 NEW cov: 12242 ft: 14560 corp: 13/166b lim: 25 exec/s: 17 rss: 73Mb L: 22/25 MS: 1 EraseBytes- 00:07:31.916 [2024-07-25 11:54:08.987814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:31.916 [2024-07-25 11:54:08.987841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:08.987944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:31.916 [2024-07-25 11:54:08.987966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:08.988057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:31.916 [2024-07-25 11:54:08.988077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:08.988180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:31.916 [2024-07-25 11:54:08.988196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:08.988294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:31.916 [2024-07-25 11:54:08.988313] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:31.916 #18 NEW cov: 12242 ft: 14621 corp: 14/191b lim: 25 exec/s: 18 rss: 73Mb L: 25/25 MS: 1 CopyPart- 00:07:31.916 [2024-07-25 11:54:09.037709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:31.916 [2024-07-25 11:54:09.037740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:09.037827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:31.916 [2024-07-25 11:54:09.037846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:09.037912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:31.916 [2024-07-25 11:54:09.037931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.916 #19 NEW cov: 12242 ft: 14653 corp: 15/207b lim: 25 exec/s: 19 rss: 73Mb L: 16/25 MS: 1 InsertRepeatedBytes- 00:07:31.916 [2024-07-25 11:54:09.088480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:31.916 [2024-07-25 11:54:09.088507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:09.088587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:31.916 [2024-07-25 11:54:09.088604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:09.088687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:31.916 [2024-07-25 11:54:09.088706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:09.088831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:31.916 [2024-07-25 11:54:09.088849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.916 #20 NEW cov: 12242 ft: 14713 corp: 16/229b lim: 25 exec/s: 20 rss: 73Mb L: 22/25 MS: 1 InsertRepeatedBytes- 00:07:31.916 [2024-07-25 11:54:09.148033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:31.916 [2024-07-25 11:54:09.148062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.916 #21 NEW cov: 12242 ft: 14744 corp: 17/238b lim: 25 exec/s: 21 rss: 73Mb L: 9/25 MS: 1 PersAutoDict- DE: "\033\240!O\036\362\345\377"- 00:07:31.916 [2024-07-25 11:54:09.198701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:31.916 [2024-07-25 11:54:09.198733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.916 [2024-07-25 11:54:09.198820] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:31.916 [2024-07-25 11:54:09.198837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.917 [2024-07-25 11:54:09.198902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:31.917 [2024-07-25 11:54:09.198921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.174 #22 NEW cov: 12242 ft: 14763 corp: 18/254b lim: 25 exec/s: 22 rss: 73Mb L: 16/25 MS: 1 ShuffleBytes- 00:07:32.174 [2024-07-25 11:54:09.268497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.174 [2024-07-25 11:54:09.268528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.174 #23 NEW cov: 12242 ft: 14791 corp: 19/260b lim: 25 exec/s: 23 rss: 73Mb L: 6/25 MS: 1 ChangeBit- 00:07:32.174 [2024-07-25 11:54:09.339046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.174 [2024-07-25 11:54:09.339077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.174 [2024-07-25 11:54:09.339152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.174 [2024-07-25 11:54:09.339171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.174 #24 NEW cov: 12242 ft: 14805 corp: 20/274b lim: 25 exec/s: 24 rss: 73Mb L: 14/25 MS: 1 EraseBytes- 00:07:32.174 [2024-07-25 11:54:09.409315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.174 [2024-07-25 11:54:09.409343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.174 [2024-07-25 11:54:09.409410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.174 [2024-07-25 11:54:09.409433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.174 #25 NEW cov: 12242 ft: 14840 corp: 21/286b lim: 25 exec/s: 25 rss: 73Mb L: 12/25 MS: 1 InsertRepeatedBytes- 00:07:32.431 [2024-07-25 11:54:09.479901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.431 [2024-07-25 11:54:09.479931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.480007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.431 [2024-07-25 11:54:09.480024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.480091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:32.431 [2024-07-25 11:54:09.480109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.431 #26 NEW cov: 12242 ft: 14860 corp: 22/301b lim: 25 exec/s: 26 rss: 73Mb L: 15/25 MS: 1 EraseBytes- 00:07:32.431 [2024-07-25 11:54:09.530319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.431 [2024-07-25 11:54:09.530350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.530431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.431 [2024-07-25 11:54:09.530452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.530512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:32.431 [2024-07-25 11:54:09.530532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.530639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:32.431 [2024-07-25 11:54:09.530658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.431 #27 NEW cov: 12242 ft: 14875 corp: 23/324b lim: 25 exec/s: 27 rss: 73Mb L: 23/25 MS: 1 ChangeByte- 00:07:32.431 [2024-07-25 11:54:09.579760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.431 [2024-07-25 11:54:09.579785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.431 #28 NEW cov: 12242 ft: 14888 corp: 24/330b lim: 25 exec/s: 28 rss: 73Mb L: 6/25 MS: 1 ChangeBit- 00:07:32.431 [2024-07-25 11:54:09.630972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.431 [2024-07-25 11:54:09.631003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.631093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.431 [2024-07-25 11:54:09.631113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.631183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:32.431 [2024-07-25 11:54:09.631200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.631301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:32.431 [2024-07-25 11:54:09.631317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.431 [2024-07-25 11:54:09.631417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:32.432 [2024-07-25 11:54:09.631437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:32.432 #29 NEW cov: 12242 ft: 14920 corp: 25/355b lim: 25 exec/s: 29 rss: 73Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:32.432 [2024-07-25 11:54:09.680232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.432 [2024-07-25 11:54:09.680265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.432 [2024-07-25 11:54:09.680377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.432 [2024-07-25 11:54:09.680392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.432 #30 NEW cov: 12242 ft: 14941 corp: 26/369b lim: 25 exec/s: 30 rss: 73Mb L: 14/25 MS: 1 InsertRepeatedBytes- 00:07:32.432 [2024-07-25 11:54:09.731134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.432 [2024-07-25 11:54:09.731162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.432 [2024-07-25 11:54:09.731257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.432 [2024-07-25 11:54:09.731277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.432 [2024-07-25 11:54:09.731358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:32.432 [2024-07-25 11:54:09.731381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.432 [2024-07-25 11:54:09.731483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:32.432 [2024-07-25 11:54:09.731499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.689 #31 NEW cov: 12242 ft: 14994 corp: 27/391b lim: 25 exec/s: 31 rss: 73Mb L: 22/25 MS: 1 ChangeBinInt- 00:07:32.689 [2024-07-25 11:54:09.790466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.689 [2024-07-25 11:54:09.790493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.689 #32 NEW cov: 12242 ft: 15020 corp: 28/397b lim: 25 exec/s: 32 rss: 73Mb L: 6/25 MS: 1 CopyPart- 00:07:32.690 [2024-07-25 11:54:09.841023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.690 [2024-07-25 11:54:09.841051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.690 [2024-07-25 11:54:09.841125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.690 [2024-07-25 11:54:09.841144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.690 #33 NEW cov: 12242 ft: 15063 corp: 29/411b lim: 25 exec/s: 33 rss: 73Mb L: 14/25 MS: 1 ChangeByte- 00:07:32.690 [2024-07-25 11:54:09.890869] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.690 [2024-07-25 11:54:09.890896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.690 #34 NEW cov: 12242 ft: 15075 corp: 30/418b lim: 25 exec/s: 34 rss: 73Mb L: 7/25 MS: 1 InsertByte- 00:07:32.690 [2024-07-25 11:54:09.941704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:32.690 [2024-07-25 11:54:09.941730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.690 [2024-07-25 11:54:09.941815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:32.690 [2024-07-25 11:54:09.941835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.690 [2024-07-25 11:54:09.941926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:32.690 [2024-07-25 11:54:09.941947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.690 #35 NEW cov: 12242 ft: 15100 corp: 31/433b lim: 25 exec/s: 17 rss: 73Mb L: 15/25 MS: 1 CrossOver- 00:07:32.690 #35 DONE cov: 12242 ft: 15100 corp: 31/433b lim: 25 exec/s: 17 rss: 73Mb 00:07:32.690 ###### Recommended dictionary. ###### 00:07:32.690 "\033\240!O\036\362\345\377" # Uses: 2 00:07:32.690 ###### End of recommended dictionary. ###### 00:07:32.690 Done 35 runs in 2 second(s) 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.948 11:54:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:32.948 [2024-07-25 11:54:10.136667] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:32.948 [2024-07-25 11:54:10.136732] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911161 ] 00:07:32.948 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.207 [2024-07-25 11:54:10.343014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.207 [2024-07-25 11:54:10.413861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.207 [2024-07-25 11:54:10.473313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.207 [2024-07-25 11:54:10.489593] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:33.207 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.207 INFO: Seed: 1989487786 00:07:33.467 INFO: Loaded 1 modules (359061 inline 8-bit counters): 359061 [0x29c768c, 0x2a1f121), 00:07:33.467 INFO: Loaded 1 PC tables (359061 PCs): 359061 [0x2a1f128,0x2f99a78), 00:07:33.467 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:33.467 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.467 #2 INITED exec/s: 0 rss: 65Mb 00:07:33.467 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:33.467 This may also happen if the target rejected all inputs we tried so far 00:07:33.467 [2024-07-25 11:54:10.548583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.467 [2024-07-25 11:54:10.548613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.467 [2024-07-25 11:54:10.548656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.467 [2024-07-25 11:54:10.548673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.467 [2024-07-25 11:54:10.548729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.467 [2024-07-25 11:54:10.548750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.467 [2024-07-25 11:54:10.548806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.467 [2024-07-25 11:54:10.548822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.726 NEW_FUNC[1/702]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:33.726 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.727 #5 NEW cov: 12088 ft: 12086 corp: 2/88b lim: 100 exec/s: 0 rss: 72Mb L: 87/87 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:33.727 [2024-07-25 11:54:10.899409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.899456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.727 [2024-07-25 11:54:10.899519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.899540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.727 [2024-07-25 11:54:10.899601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.899622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.727 [2024-07-25 11:54:10.899683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.899703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:07:33.727 #6 NEW cov: 12201 ft: 12827 corp: 3/176b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 CrossOver- 00:07:33.727 [2024-07-25 11:54:10.958965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.958995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.727 #10 NEW cov: 12207 ft: 13926 corp: 4/215b lim: 100 exec/s: 0 rss: 72Mb L: 39/88 MS: 4 ChangeBinInt-ShuffleBytes-CopyPart-CrossOver- 00:07:33.727 [2024-07-25 11:54:10.999476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.999508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.727 [2024-07-25 11:54:10.999545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.999561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.727 [2024-07-25 11:54:10.999614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.999631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.727 [2024-07-25 11:54:10.999684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.727 [2024-07-25 11:54:10.999699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.986 #11 NEW cov: 12292 ft: 14247 corp: 5/306b lim: 100 exec/s: 0 rss: 72Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:07:33.986 [2024-07-25 11:54:11.049616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.049644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.049685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.049700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.049753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.049786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.049848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.049862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.986 #14 NEW cov: 12292 ft: 14338 corp: 6/405b lim: 100 exec/s: 0 rss: 72Mb L: 99/99 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:07:33.986 [2024-07-25 11:54:11.089461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.089490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.089537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.089554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.986 #20 NEW cov: 12292 ft: 14742 corp: 7/460b lim: 100 exec/s: 0 rss: 73Mb L: 55/99 MS: 1 EraseBytes- 00:07:33.986 [2024-07-25 11:54:11.139820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.139849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.139893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.139913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.139966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.139983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.140034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.140051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.986 #21 NEW cov: 12292 ft: 14790 corp: 8/559b lim: 100 exec/s: 0 rss: 73Mb L: 99/99 MS: 1 CopyPart- 00:07:33.986 [2024-07-25 11:54:11.179775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.179804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.179838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.179855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.179910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.179927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.986 #22 NEW cov: 12292 ft: 15076 corp: 9/631b lim: 100 exec/s: 0 rss: 73Mb L: 72/99 MS: 1 EraseBytes- 00:07:33.986 [2024-07-25 11:54:11.219771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.219799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.219835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029724000654421871 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.219851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.986 #23 NEW cov: 12292 ft: 15098 corp: 10/686b lim: 100 exec/s: 0 rss: 73Mb L: 55/99 MS: 1 ChangeBit- 00:07:33.986 [2024-07-25 11:54:11.270216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.986 [2024-07-25 11:54:11.270244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.986 [2024-07-25 11:54:11.270287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.987 [2024-07-25 11:54:11.270303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.987 [2024-07-25 11:54:11.270354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.987 [2024-07-25 11:54:11.270370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.987 [2024-07-25 11:54:11.270421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.987 [2024-07-25 11:54:11.270439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.245 #24 NEW cov: 12292 ft: 15127 corp: 11/782b lim: 100 exec/s: 0 rss: 73Mb L: 96/99 MS: 1 InsertRepeatedBytes- 00:07:34.245 [2024-07-25 11:54:11.320175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.320204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.320240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:751942187195789167 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:34.245 [2024-07-25 11:54:11.320256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.320309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.320326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.245 #25 NEW cov: 12292 ft: 15159 corp: 12/853b lim: 100 exec/s: 0 rss: 73Mb L: 71/99 MS: 1 CrossOver- 00:07:34.245 [2024-07-25 11:54:11.360453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.360481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.360521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.360536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.360590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.360605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.360657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.360673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.245 #26 NEW cov: 12292 ft: 15194 corp: 13/950b lim: 100 exec/s: 0 rss: 73Mb L: 97/99 MS: 1 InsertByte- 00:07:34.245 [2024-07-25 11:54:11.410574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.410602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.410638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.410655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.410711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.410728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.410786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 
len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.410802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.245 NEW_FUNC[1/1]: 0x1a8a050 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:34.245 #27 NEW cov: 12315 ft: 15263 corp: 14/1042b lim: 100 exec/s: 0 rss: 73Mb L: 92/99 MS: 1 CMP- DE: "\005\000\000\000"- 00:07:34.245 [2024-07-25 11:54:11.450519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.450548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.450585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:751942187195789167 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.450601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.450653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.450670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.245 #28 NEW cov: 12315 ft: 15285 corp: 15/1109b lim: 100 exec/s: 0 rss: 73Mb L: 67/99 MS: 1 EraseBytes- 00:07:34.245 [2024-07-25 11:54:11.500560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.500588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.245 [2024-07-25 11:54:11.500640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029724000654421871 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.245 [2024-07-25 11:54:11.500656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.245 #29 NEW cov: 12315 ft: 15294 corp: 16/1164b lim: 100 exec/s: 29 rss: 73Mb L: 55/99 MS: 1 PersAutoDict- DE: "\005\000\000\000"- 00:07:34.504 [2024-07-25 11:54:11.551009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.551037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.551078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.551095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.551148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:34.504 [2024-07-25 11:54:11.551163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.551219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8028070335166246767 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.551234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.504 #30 NEW cov: 12315 ft: 15362 corp: 17/1251b lim: 100 exec/s: 30 rss: 73Mb L: 87/99 MS: 1 ChangeBinInt- 00:07:34.504 [2024-07-25 11:54:11.590812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.590845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.590893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029724000654421871 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.590909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.504 #31 NEW cov: 12315 ft: 15375 corp: 18/1307b lim: 100 exec/s: 31 rss: 73Mb L: 56/99 MS: 1 InsertByte- 00:07:34.504 [2024-07-25 11:54:11.641253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.641282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.641323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.641340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.641392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.641409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.641460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.641475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.504 #32 NEW cov: 12315 ft: 15396 corp: 19/1403b lim: 100 exec/s: 32 rss: 73Mb L: 96/99 MS: 1 ChangeBit- 00:07:34.504 [2024-07-25 11:54:11.681313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.681341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 
11:54:11.681383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28417 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.681398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.681451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.681468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.681519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8028070335166246767 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.681534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.504 #33 NEW cov: 12315 ft: 15407 corp: 20/1490b lim: 100 exec/s: 33 rss: 73Mb L: 87/99 MS: 1 ChangeBinInt- 00:07:34.504 [2024-07-25 11:54:11.731195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.731224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.504 [2024-07-25 11:54:11.731282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.731299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.504 #34 NEW cov: 12315 ft: 15409 corp: 21/1545b lim: 100 exec/s: 34 rss: 73Mb L: 55/99 MS: 1 ShuffleBytes- 00:07:34.504 [2024-07-25 11:54:11.771174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.504 [2024-07-25 11:54:11.771202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.763 #35 NEW cov: 12315 ft: 15417 corp: 22/1584b lim: 100 exec/s: 35 rss: 73Mb L: 39/99 MS: 1 EraseBytes- 00:07:34.763 [2024-07-25 11:54:11.821287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.763 [2024-07-25 11:54:11.821314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.763 #36 NEW cov: 12315 ft: 15516 corp: 23/1607b lim: 100 exec/s: 36 rss: 74Mb L: 23/99 MS: 1 EraseBytes- 00:07:34.763 [2024-07-25 11:54:11.871851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.763 [2024-07-25 11:54:11.871883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.763 [2024-07-25 11:54:11.871919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 
nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.763 [2024-07-25 11:54:11.871935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.763 [2024-07-25 11:54:11.871987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.763 [2024-07-25 11:54:11.872002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.763 [2024-07-25 11:54:11.872055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.763 [2024-07-25 11:54:11.872070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.763 #37 NEW cov: 12315 ft: 15540 corp: 24/1701b lim: 100 exec/s: 37 rss: 74Mb L: 94/99 MS: 1 EraseBytes- 00:07:34.763 [2024-07-25 11:54:11.921820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.763 [2024-07-25 11:54:11.921849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:11.921886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:751942187195789167 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:11.921902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:11.921955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759187442429807 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:11.921971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.764 #38 NEW cov: 12315 ft: 15544 corp: 25/1772b lim: 100 exec/s: 38 rss: 74Mb L: 71/99 MS: 1 CrossOver- 00:07:34.764 [2024-07-25 11:54:11.961936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:11.961967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:11.962006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:11.962022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:11.962073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:11.962088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.764 #39 NEW cov: 12315 ft: 15570 corp: 26/1844b lim: 100 exec/s: 
39 rss: 74Mb L: 72/99 MS: 1 ChangeBit- 00:07:34.764 [2024-07-25 11:54:12.002233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:12.002261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:12.002303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28417 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:12.002318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:12.002371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:12.002386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:12.002439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28522 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:12.002457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.764 #40 NEW cov: 12315 ft: 15572 corp: 27/1937b lim: 100 exec/s: 40 rss: 74Mb L: 93/99 MS: 1 CopyPart- 00:07:34.764 [2024-07-25 11:54:12.052207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:12.052234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:12.052283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:12.052299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.764 [2024-07-25 11:54:12.052350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.764 [2024-07-25 11:54:12.052365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.023 #41 NEW cov: 12315 ft: 15578 corp: 28/2016b lim: 100 exec/s: 41 rss: 74Mb L: 79/99 MS: 1 CrossOver- 00:07:35.023 [2024-07-25 11:54:12.102083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.102111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.023 #42 NEW cov: 12315 ft: 15639 corp: 29/2048b lim: 100 exec/s: 42 rss: 74Mb L: 32/99 MS: 1 EraseBytes- 00:07:35.023 [2024-07-25 11:54:12.152638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.152666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.152704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.152719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.152771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.152786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.152836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.152851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.023 #43 NEW cov: 12315 ft: 15655 corp: 30/2147b lim: 100 exec/s: 43 rss: 74Mb L: 99/99 MS: 1 PersAutoDict- DE: "\005\000\000\000"- 00:07:35.023 [2024-07-25 11:54:12.192879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029642160052596591 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.192907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.192955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.192971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.193023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.193039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.193089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2097865012304223517 len:7454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.193105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.193157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:8029759183645405039 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.193172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:35.023 #44 NEW cov: 12315 ft: 15691 corp: 31/2247b lim: 100 exec/s: 44 rss: 74Mb L: 100/100 MS: 1 PersAutoDict- DE: "\005\000\000\000"- 00:07:35.023 [2024-07-25 11:54:12.232913] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.232941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.232979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28417 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.232994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.233046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.233062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.023 [2024-07-25 11:54:12.233113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28522 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.233130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.023 #45 NEW cov: 12315 ft: 15712 corp: 32/2340b lim: 100 exec/s: 45 rss: 74Mb L: 93/100 MS: 1 ChangeBinInt- 00:07:35.023 [2024-07-25 11:54:12.283040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.023 [2024-07-25 11:54:12.283067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.024 [2024-07-25 11:54:12.283110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.024 [2024-07-25 11:54:12.283126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.024 [2024-07-25 11:54:12.283179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.024 [2024-07-25 11:54:12.283195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.024 [2024-07-25 11:54:12.283248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.024 [2024-07-25 11:54:12.283264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.024 #46 NEW cov: 12315 ft: 15721 corp: 33/2427b lim: 100 exec/s: 46 rss: 74Mb L: 87/100 MS: 1 ShuffleBytes- 00:07:35.024 [2024-07-25 11:54:12.322977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.024 [2024-07-25 11:54:12.323006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
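The '#N NEW cov: ...' status lines interleaved through the output above are standard libFuzzer progress reports: 'cov' is the number of covered code edges, 'ft' the number of coverage features, 'corp' the corpus entry count and total size in bytes, 'lim' the current input-length limit, 'exec/s' the execution rate, 'rss' resident memory, 'L' the size of the new input against the largest in the corpus, and 'MS' the mutation sequence that produced it (ChangeBinInt, ShuffleBytes, CrossOver, and so on). A minimal bash sketch for pulling the coverage progression out of a saved console log; the file name fuzz.log is assumed for illustration and is not produced by this job:

  # Print "<run number> <edge coverage>" for every NEW-coverage event.
  grep -oE '#[0-9]+ NEW cov: [0-9]+' fuzz.log \
    | awk '{sub(/#/, "", $1); print $1, $4}'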
00:07:35.024 [2024-07-25 11:54:12.323041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:751942187195789167 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.024 [2024-07-25 11:54:12.323057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.024 [2024-07-25 11:54:12.323109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.024 [2024-07-25 11:54:12.323124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.284 #47 NEW cov: 12315 ft: 15730 corp: 34/2494b lim: 100 exec/s: 47 rss: 74Mb L: 67/100 MS: 1 ChangeByte- 00:07:35.284 [2024-07-25 11:54:12.362930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.362957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.362999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.363018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.284 #48 NEW cov: 12315 ft: 15786 corp: 35/2539b lim: 100 exec/s: 48 rss: 74Mb L: 45/100 MS: 1 CrossOver- 00:07:35.284 [2024-07-25 11:54:12.413331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.413358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.413403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.413419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.413470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.413486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.413540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8029759185026510703 len:28528 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.413556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.284 #49 NEW cov: 12315 ft: 15797 corp: 36/2635b lim: 100 exec/s: 49 rss: 74Mb L: 96/100 MS: 1 CopyPart- 00:07:35.284 [2024-07-25 11:54:12.453024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8029759185026510703 len:28526 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 
[2024-07-25 11:54:12.453052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.284 #50 NEW cov: 12315 ft: 15805 corp: 37/2667b lim: 100 exec/s: 50 rss: 74Mb L: 32/100 MS: 1 ChangeBit- 00:07:35.284 [2024-07-25 11:54:12.503722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.503755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.503803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.503820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.503871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.503886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.503937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.503952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.284 [2024-07-25 11:54:12.504004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:28823037608787968 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.284 [2024-07-25 11:54:12.504020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:35.284 #51 NEW cov: 12315 ft: 15818 corp: 38/2767b lim: 100 exec/s: 25 rss: 75Mb L: 100/100 MS: 1 CopyPart- 00:07:35.284 #51 DONE cov: 12315 ft: 15818 corp: 38/2767b lim: 100 exec/s: 25 rss: 75Mb 00:07:35.284 ###### Recommended dictionary. ###### 00:07:35.284 "\005\000\000\000" # Uses: 3 00:07:35.284 ###### End of recommended dictionary. 
###### 00:07:35.284 Done 51 runs in 2 second(s) 00:07:35.543 11:54:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:35.543 11:54:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.543 11:54:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.543 11:54:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:35.543 00:07:35.543 real 1m6.228s 00:07:35.543 user 1m40.771s 00:07:35.543 sys 0m8.615s 00:07:35.543 11:54:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.543 11:54:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:35.543 ************************************ 00:07:35.543 END TEST nvmf_llvm_fuzz 00:07:35.543 ************************************ 00:07:35.543 11:54:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:35.543 11:54:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:35.543 11:54:12 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:35.543 11:54:12 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.543 11:54:12 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.543 11:54:12 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:35.543 ************************************ 00:07:35.543 START TEST vfio_llvm_fuzz 00:07:35.543 ************************************ 00:07:35.543 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:35.805 * Looking for test storage... 
00:07:35.805 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:35.805 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
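The CONFIG_* assignments traced from build_config.sh above and below are the same switches that reappear later in this log as SPDK_CONFIG_* preprocessor symbols in include/spdk/config.h: 'y' becomes '#define SPDK_CONFIG_X 1', 'n' becomes '#undef SPDK_CONFIG_X', and string values are carried over verbatim. A rough bash illustration of that mapping, not SPDK's actual header generator:

  # Illustrative only; the real generator lives in SPDK's build scripts
  # and handles more cases than this sketch.
  emit_define() {
    local key=${1%%=*} val=${1#*=}
    case "$val" in
      y) printf '#define SPDK_%s 1\n' "$key" ;;
      n) printf '#undef SPDK_%s\n' "$key" ;;
      *) printf '#define SPDK_%s %s\n' "$key" "$val" ;;
    esac
  }
  emit_define CONFIG_FUZZER=y    # -> #define SPDK_CONFIG_FUZZER 1
  emit_define CONFIG_ASAN=n      # -> #undef SPDK_CONFIG_ASAN
  emit_define CONFIG_ARCH=native # -> #define SPDK_CONFIG_ARCH native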
00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:35.806 #define SPDK_CONFIG_H 00:07:35.806 #define SPDK_CONFIG_APPS 1 00:07:35.806 #define SPDK_CONFIG_ARCH native 00:07:35.806 #undef SPDK_CONFIG_ASAN 00:07:35.806 #undef SPDK_CONFIG_AVAHI 00:07:35.806 #undef SPDK_CONFIG_CET 00:07:35.806 #define SPDK_CONFIG_COVERAGE 1 00:07:35.806 #define SPDK_CONFIG_CROSS_PREFIX 00:07:35.806 #undef SPDK_CONFIG_CRYPTO 00:07:35.806 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:35.806 #undef SPDK_CONFIG_CUSTOMOCF 00:07:35.806 #undef SPDK_CONFIG_DAOS 00:07:35.806 #define SPDK_CONFIG_DAOS_DIR 00:07:35.806 #define SPDK_CONFIG_DEBUG 1 00:07:35.806 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:35.806 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:35.806 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:35.806 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:35.806 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:35.806 #undef SPDK_CONFIG_DPDK_UADK 00:07:35.806 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:35.806 #define SPDK_CONFIG_EXAMPLES 1 00:07:35.806 #undef SPDK_CONFIG_FC 00:07:35.806 #define SPDK_CONFIG_FC_PATH 00:07:35.806 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:35.806 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:35.806 #undef SPDK_CONFIG_FUSE 00:07:35.806 #define SPDK_CONFIG_FUZZER 1 00:07:35.806 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:35.806 #undef SPDK_CONFIG_GOLANG 00:07:35.806 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:35.806 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:35.806 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:35.806 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:35.806 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:35.806 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:35.806 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:35.806 #define SPDK_CONFIG_IDXD 1 00:07:35.806 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:35.806 #undef SPDK_CONFIG_IPSEC_MB 00:07:35.806 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:35.806 #define SPDK_CONFIG_ISAL 1 00:07:35.806 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:07:35.806 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:35.806 #define SPDK_CONFIG_LIBDIR 00:07:35.806 #undef SPDK_CONFIG_LTO 00:07:35.806 #define SPDK_CONFIG_MAX_LCORES 128 00:07:35.806 #define SPDK_CONFIG_NVME_CUSE 1 00:07:35.806 #undef SPDK_CONFIG_OCF 00:07:35.806 #define SPDK_CONFIG_OCF_PATH 00:07:35.806 #define SPDK_CONFIG_OPENSSL_PATH 00:07:35.806 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:35.806 #define SPDK_CONFIG_PGO_DIR 00:07:35.806 #undef SPDK_CONFIG_PGO_USE 00:07:35.806 #define SPDK_CONFIG_PREFIX /usr/local 00:07:35.806 #undef SPDK_CONFIG_RAID5F 00:07:35.806 #undef SPDK_CONFIG_RBD 00:07:35.806 #define SPDK_CONFIG_RDMA 1 00:07:35.806 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:35.806 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:35.806 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:35.806 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:35.806 #undef SPDK_CONFIG_SHARED 00:07:35.806 #undef SPDK_CONFIG_SMA 00:07:35.806 #define SPDK_CONFIG_TESTS 1 00:07:35.806 #undef SPDK_CONFIG_TSAN 00:07:35.806 #define SPDK_CONFIG_UBLK 1 00:07:35.806 #define SPDK_CONFIG_UBSAN 1 00:07:35.806 #undef SPDK_CONFIG_UNIT_TESTS 00:07:35.806 #undef SPDK_CONFIG_URING 00:07:35.806 #define SPDK_CONFIG_URING_PATH 00:07:35.806 #undef SPDK_CONFIG_URING_ZNS 00:07:35.806 #undef SPDK_CONFIG_USDT 00:07:35.806 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:35.806 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:35.806 #define SPDK_CONFIG_VFIO_USER 1 00:07:35.806 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:35.806 #define SPDK_CONFIG_VHOST 1 00:07:35.806 #define SPDK_CONFIG_VIRTIO 1 00:07:35.806 #undef SPDK_CONFIG_VTUNE 00:07:35.806 #define SPDK_CONFIG_VTUNE_DIR 00:07:35.806 #define SPDK_CONFIG_WERROR 1 00:07:35.806 #define SPDK_CONFIG_WPDK_DIR 00:07:35.806 #undef SPDK_CONFIG_XNVME 00:07:35.806 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:35.806 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
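The heavily escaped pattern in the applications.sh@23 trace above is a bash glob match: it reads the generated config header and checks whether it contains the literal string '#define SPDK_CONFIG_DEBUG', i.e. whether this is a debug build. A standalone sketch of the same test, with the escapes removed (the header path is the one checked at applications.sh@22):

  config_h=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h
  if [[ $(< "$config_h") == *'#define SPDK_CONFIG_DEBUG'* ]]; then
    echo "debug build detected"
  fi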
00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:35.807 
11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- 
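The long run of paired trace lines above and below, ': 0' or ': 1' immediately followed by 'export SPDK_TEST_...', is the usual bash default-then-export idiom as it looks under xtrace: the ':' no-op exists only to force the parameter expansion that assigns the default, and the trace shows the already-expanded value. A sketch of what each traced pair most likely corresponds to in autotest_common.sh:

  # Under 'set -x' the first line traces as ': 1' once the default has
  # been substituted, matching the pairs in this log.
  : "${SPDK_TEST_FUZZER:=1}"
  export SPDK_TEST_FUZZER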
common/autotest_common.sh@88 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:35.807 11:54:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:35.807 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:07:35.808 11:54:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:07:35.808 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:07:35.809 11:54:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j72 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 911557 ]] 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 911557 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.FCL7rg 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:07:35.809 11:54:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.FCL7rg/tests/vfio /tmp/spdk.FCL7rg 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=945618944 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4338810880 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=50327367680 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742534656 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=11415166976 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30866554880 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871265280 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4710400 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342714368 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348510208 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=5795840 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870933504 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871269376 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=335872 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174248960 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174253056 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:07:35.809 * Looking for test storage... 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=50327367680 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=13629759488 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:35.809 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:35.809 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
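The "fuzz_num=7" / "(( i < fuzz_num ))" / "start_llvm_fuzz 0 1 0x1" records above trace a count-and-dispatch loop: the harness counts how many ".fn =" fuzzer entries llvm_vfio_fuzz.c registers, then runs each index in turn with a time budget and core mask. A short sketch under that reading (the function body here is a stand-in, not the real vfio/run.sh):

  #!/usr/bin/env bash
  fuzzfile=test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
  fuzz_num=$(grep -c '\.fn =' "$fuzzfile")    # one '.fn =' per fuzzer; 7 here
  (( fuzz_num != 0 )) || { echo "no fuzzers registered" >&2; exit 1; }

  start_llvm_fuzz() {                         # stand-in for vfio/run.sh's function
    local fuzzer_type=$1 timen=$2 core=$3
    echo "run fuzzer $fuzzer_type for ${timen}s on core mask $core"
  }

  for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" 1 0x1                # 'start_llvm_fuzz 0 1 0x1' ... '6 1 0x1'
  done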
00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:35.810 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:35.810 11:54:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:35.810 [2024-07-25 11:54:13.093768] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:35.810 [2024-07-25 11:54:13.093851] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911609 ] 00:07:36.069 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.069 [2024-07-25 11:54:13.184413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.069 [2024-07-25 11:54:13.267368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.327 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.327 INFO: Seed: 660538115 00:07:36.327 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:07:36.327 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:07:36.327 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:36.327 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.327 #2 INITED exec/s: 0 rss: 66Mb 00:07:36.327 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:36.327 This may also happen if the target rejected all inputs we tried so far 00:07:36.327 [2024-07-25 11:54:13.523411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:36.844 NEW_FUNC[1/659]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:36.844 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:36.844 #20 NEW cov: 10983 ft: 10508 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:36.844 #36 NEW cov: 10998 ft: 13467 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:37.102 #37 NEW cov: 10998 ft: 13836 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:07:37.102 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:37.102 #38 NEW cov: 11022 ft: 15375 corp: 5/25b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:07:37.360 #41 NEW cov: 11022 ft: 16465 corp: 6/31b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 3 InsertRepeatedBytes-ChangeByte-CrossOver- 00:07:37.360 #42 NEW cov: 11022 ft: 16649 corp: 7/37b lim: 6 exec/s: 42 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:07:37.619 #43 NEW cov: 11024 ft: 17172 corp: 8/43b lim: 6 exec/s: 43 rss: 75Mb L: 6/6 MS: 1 ChangeByte- 00:07:37.619 [2024-07-25 11:54:14.753394] ctrlr.c:1592:nvmf_property_set: *ERROR*: prop set_cb failed 00:07:37.619 NEW_FUNC[1/1]: 0x11da680 in nvmf_prop_get_aqa /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1360 00:07:37.619 #44 NEW cov: 11037 ft: 17483 corp: 9/49b lim: 6 exec/s: 44 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:37.619 [2024-07-25 11:54:14.889466] ctrlr.c:1592:nvmf_property_set: *ERROR*: prop set_cb failed 00:07:37.876 #45 NEW cov: 11037 ft: 17587 corp: 10/55b lim: 6 exec/s: 45 rss: 75Mb L: 6/6 MS: 1 CrossOver- 00:07:37.876 #46 NEW cov: 11037 ft: 17927 corp: 11/61b lim: 6 exec/s: 46 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:07:38.136 #47 NEW cov: 11037 ft: 18002 corp: 12/67b lim: 6 exec/s: 47 rss: 75Mb L: 6/6 MS: 1 CopyPart- 00:07:38.136 #48 NEW cov: 11044 ft: 18339 corp: 13/73b lim: 6 exec/s: 48 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:07:38.136 #54 NEW cov: 11044 ft: 18385 corp: 14/79b lim: 6 exec/s: 54 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:38.395 [2024-07-25 11:54:15.486648] ctrlr.c:1592:nvmf_property_set: *ERROR*: prop set_cb failed 00:07:38.395 #55 NEW cov: 11044 ft: 18646 corp: 15/85b lim: 6 exec/s: 27 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:38.395 #55 DONE cov: 11044 ft: 18646 corp: 15/85b lim: 6 exec/s: 27 rss: 75Mb 00:07:38.395 Done 55 runs in 2 second(s) 00:07:38.395 [2024-07-25 11:54:15.588935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local 
core=0x1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:38.654 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:38.654 11:54:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:38.654 [2024-07-25 11:54:15.904482] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:38.654 [2024-07-25 11:54:15.904557] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911975 ] 00:07:38.654 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.914 [2024-07-25 11:54:15.991372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.914 [2024-07-25 11:54:16.074259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.172 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.172 INFO: Seed: 3474533822 00:07:39.173 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:07:39.173 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:07:39.173 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:39.173 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.173 #2 INITED exec/s: 0 rss: 66Mb 00:07:39.173 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:39.173 This may also happen if the target rejected all inputs we tried so far 00:07:39.173 [2024-07-25 11:54:16.341526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:39.173 [2024-07-25 11:54:16.417654] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:39.173 [2024-07-25 11:54:16.417682] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:39.173 [2024-07-25 11:54:16.417700] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:39.690 NEW_FUNC[1/661]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:39.690 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:39.690 #17 NEW cov: 10963 ft: 10587 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 5 CrossOver-EraseBytes-ShuffleBytes-InsertByte-CMP- DE: "\020\000"- 00:07:39.690 [2024-07-25 11:54:16.919114] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:39.690 [2024-07-25 11:54:16.919151] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:39.690 [2024-07-25 11:54:16.919169] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:39.949 #18 NEW cov: 10993 ft: 14355 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:07:39.949 [2024-07-25 11:54:17.129810] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:39.949 [2024-07-25 11:54:17.129835] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:39.949 [2024-07-25 11:54:17.129853] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:39.949 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:39.949 #19 NEW cov: 11010 ft: 16227 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:40.207 [2024-07-25 11:54:17.329448] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:40.207 [2024-07-25 11:54:17.329472] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:40.207 [2024-07-25 11:54:17.329489] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:40.207 #25 NEW cov: 11010 ft: 16836 corp: 5/17b lim: 4 exec/s: 25 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:40.466 [2024-07-25 11:54:17.519406] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:40.466 [2024-07-25 11:54:17.519429] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:40.466 [2024-07-25 11:54:17.519446] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:40.466 #26 NEW cov: 11010 ft: 17556 corp: 6/21b lim: 4 exec/s: 26 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:40.466 [2024-07-25 11:54:17.708343] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:40.466 [2024-07-25 11:54:17.708366] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:40.466 [2024-07-25 11:54:17.708384] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:40.725 #27 NEW cov: 
11010 ft: 17644 corp: 7/25b lim: 4 exec/s: 27 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:07:40.725 [2024-07-25 11:54:17.895242] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:40.725 [2024-07-25 11:54:17.895265] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:40.725 [2024-07-25 11:54:17.895281] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:40.725 #28 NEW cov: 11010 ft: 17999 corp: 8/29b lim: 4 exec/s: 28 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:40.984 [2024-07-25 11:54:18.089029] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:40.984 [2024-07-25 11:54:18.089055] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:40.984 [2024-07-25 11:54:18.089073] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:40.984 #29 NEW cov: 11017 ft: 18519 corp: 9/33b lim: 4 exec/s: 29 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:41.244 [2024-07-25 11:54:18.290821] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:41.244 [2024-07-25 11:54:18.290846] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:41.244 [2024-07-25 11:54:18.290863] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:41.244 #34 NEW cov: 11017 ft: 18639 corp: 10/37b lim: 4 exec/s: 17 rss: 75Mb L: 4/4 MS: 5 EraseBytes-ChangeByte-ChangeByte-CopyPart-PersAutoDict- DE: "\020\000"- 00:07:41.244 #34 DONE cov: 11017 ft: 18639 corp: 10/37b lim: 4 exec/s: 17 rss: 75Mb 00:07:41.244 ###### Recommended dictionary. ###### 00:07:41.244 "\020\000" # Uses: 1 00:07:41.244 ###### End of recommended dictionary. 
###### 00:07:41.244 Done 34 runs in 2 second(s) 00:07:41.244 [2024-07-25 11:54:18.425961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:41.503 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:41.503 11:54:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:41.503 [2024-07-25 11:54:18.730507] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:07:41.503 [2024-07-25 11:54:18.730572] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912350 ] 00:07:41.503 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.503 [2024-07-25 11:54:18.799316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.762 [2024-07-25 11:54:18.881885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.021 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.021 INFO: Seed: 1984567056 00:07:42.021 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:07:42.021 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:07:42.021 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:42.021 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.021 #2 INITED exec/s: 0 rss: 66Mb 00:07:42.021 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.021 This may also happen if the target rejected all inputs we tried so far 00:07:42.021 [2024-07-25 11:54:19.142774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:42.021 [2024-07-25 11:54:19.220262] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:42.539 NEW_FUNC[1/660]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:42.539 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:42.539 #7 NEW cov: 10958 ft: 10718 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 5 ChangeByte-InsertRepeatedBytes-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:42.539 [2024-07-25 11:54:19.731156] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:42.539 #8 NEW cov: 10973 ft: 14162 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:42.798 [2024-07-25 11:54:19.926833] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:42.798 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:42.798 #10 NEW cov: 10993 ft: 16246 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 EraseBytes-CopyPart- 00:07:43.056 [2024-07-25 11:54:20.126184] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:43.056 #12 NEW cov: 10993 ft: 16927 corp: 5/33b lim: 8 exec/s: 12 rss: 75Mb L: 8/8 MS: 2 CrossOver-CopyPart- 00:07:43.056 [2024-07-25 11:54:20.330688] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:43.315 #13 NEW cov: 10993 ft: 17273 corp: 6/41b lim: 8 exec/s: 13 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:43.315 [2024-07-25 11:54:20.522793] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:43.573 #24 NEW cov: 10993 ft: 17429 corp: 7/49b lim: 8 exec/s: 24 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:07:43.573 [2024-07-25 11:54:20.707555] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:43.573 #25 NEW cov: 10993 ft: 18060 
corp: 8/57b lim: 8 exec/s: 25 rss: 75Mb L: 8/8 MS: 1 ChangeByte- 00:07:43.831 [2024-07-25 11:54:20.902574] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:43.832 #26 NEW cov: 11000 ft: 18290 corp: 9/65b lim: 8 exec/s: 26 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:07:43.832 [2024-07-25 11:54:21.094019] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:44.090 #37 NEW cov: 11000 ft: 18348 corp: 10/73b lim: 8 exec/s: 18 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:44.090 #37 DONE cov: 11000 ft: 18348 corp: 10/73b lim: 8 exec/s: 18 rss: 75Mb 00:07:44.090 Done 37 runs in 2 second(s) 00:07:44.090 [2024-07-25 11:54:21.219950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:44.349 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:44.349 11:54:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 
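Each iteration stamps out a private workspace from one shared template: the vfio/run.sh@36 through @44 records above create per-run socket directories, sed-rewrite the generic /tmp/vfio-user paths, and extend the LSAN leak suppressions. A sketch of that setup follows; the two output redirections are assumptions (xtrace does not show redirections), inferred from the later "-c /tmp/vfio-user-3/fuzz_vfio_json.conf" flag and from LSAN_OPTIONS pointing at /var/tmp/suppress_vfio_fuzz:

  #!/usr/bin/env bash
  N=3                                       # fuzzer index for this iteration
  fuzzer_dir=/tmp/vfio-user-$N
  template=test/fuzz/llvm/vfio/fuzz_vfio_json.conf
  suppress_file=/var/tmp/suppress_vfio_fuzz

  mkdir -p "$fuzzer_dir"/domain/{1,2}       # vfio-user sockets for this run only
  # Point the shared JSON template at this run's private socket directories.
  sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%" \
      -e "s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
      "$template" > "$fuzzer_dir/fuzz_vfio_json.conf"
  # Known-benign allocations LeakSanitizer should ignore during fuzzing.
  echo leak:spdk_nvmf_qpair_disconnect >> "$suppress_file"
  echo leak:nvmf_ctrlr_create          >> "$suppress_file"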
00:07:44.349 [2024-07-25 11:54:21.535892] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:44.349 [2024-07-25 11:54:21.535966] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912724 ] 00:07:44.349 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.349 [2024-07-25 11:54:21.622318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.608 [2024-07-25 11:54:21.703664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.608 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.608 INFO: Seed: 504625526 00:07:44.867 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:07:44.867 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:07:44.867 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:44.867 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.867 #2 INITED exec/s: 0 rss: 66Mb 00:07:44.867 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.867 This may also happen if the target rejected all inputs we tried so far 00:07:44.867 [2024-07-25 11:54:21.957864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:45.435 NEW_FUNC[1/660]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:45.435 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:45.435 #116 NEW cov: 10970 ft: 10833 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 CopyPart-InsertRepeatedBytes-ChangeBit-InsertRepeatedBytes- 00:07:45.435 #117 NEW cov: 10984 ft: 14164 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:07:45.694 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:45.694 #118 NEW cov: 11001 ft: 16485 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\005\000\000\000"- 00:07:45.952 #119 NEW cov: 11001 ft: 16879 corp: 5/129b lim: 32 exec/s: 119 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:45.952 #120 NEW cov: 11001 ft: 17414 corp: 6/161b lim: 32 exec/s: 120 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:46.218 #131 NEW cov: 11001 ft: 17746 corp: 7/193b lim: 32 exec/s: 131 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:07:46.476 #132 NEW cov: 11001 ft: 17778 corp: 8/225b lim: 32 exec/s: 132 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:46.476 #133 NEW cov: 11008 ft: 17821 corp: 9/257b lim: 32 exec/s: 133 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:46.733 #139 NEW cov: 11008 ft: 17983 corp: 10/289b lim: 32 exec/s: 139 rss: 75Mb L: 32/32 MS: 1 CMP- DE: "\377\377\377s"- 00:07:46.992 #140 NEW cov: 11008 ft: 18038 corp: 11/321b lim: 32 exec/s: 70 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:07:46.992 #140 DONE cov: 11008 ft: 18038 corp: 11/321b lim: 32 exec/s: 70 rss: 75Mb 00:07:46.992 ###### Recommended dictionary. ###### 00:07:46.992 "\005\000\000\000" # Uses: 2 00:07:46.992 "\377\377\377s" # Uses: 0 00:07:46.992 ###### End of recommended dictionary. 
###### 00:07:46.992 Done 140 runs in 2 second(s) 00:07:46.992 [2024-07-25 11:54:24.125934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:47.251 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:47.251 11:54:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:47.251 [2024-07-25 11:54:24.432504] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
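The long vfio/run.sh@47 launch record above mixes generic SPDK application options with fuzzer-specific ones. The annotations below are inferred from the "local" assignments traced earlier (timen maps to -t, core to -m, corpus_dir to -D, fuzzer_type to -Z, vfiouser_dir to -F, vfiouser_io_dir to -Y) and from standard SPDK app flags; the role of -P is not shown in this log, so treat that comment as a guess:

  N=4
  test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz \
    -m 0x1                                   `# core mask: one reactor on core 0` \
    -s 0                                     `# hugepage memory size for the app` \
    -r /tmp/vfio-user-$N/spdk$N.sock         `# per-run SPDK RPC socket` \
    -F /tmp/vfio-user-$N/domain/1            `# vfiouser_dir: socket under attack` \
    -Y /tmp/vfio-user-$N/domain/2            `# vfiouser_io_dir: I/O-side controller` \
    -c /tmp/vfio-user-$N/fuzz_vfio_json.conf `# config produced by the sed step` \
    -D ../corpus/llvm_vfio_$N                `# persistent corpus directory` \
    -P ../output/llvm/                       `# output location (exact role unclear)` \
    -t 1                                     `# timen: seconds per fuzzer` \
    -Z $N                                    `# fuzzer_type: which .fn entry runs`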
00:07:47.251 [2024-07-25 11:54:24.432580] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913100 ] 00:07:47.251 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.251 [2024-07-25 11:54:24.519077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.510 [2024-07-25 11:54:24.603859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.510 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.510 INFO: Seed: 3413615506 00:07:47.769 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55), 00:07:47.769 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8), 00:07:47.769 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:47.769 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.769 #2 INITED exec/s: 0 rss: 66Mb 00:07:47.769 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.769 This may also happen if the target rejected all inputs we tried so far 00:07:47.769 [2024-07-25 11:54:24.871298] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:48.336 NEW_FUNC[1/660]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:48.336 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:48.336 #52 NEW cov: 10970 ft: 10767 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 InsertRepeatedBytes-InsertRepeatedBytes-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:48.336 #58 NEW cov: 10986 ft: 14186 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:48.598 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:48.598 #64 NEW cov: 11003 ft: 15570 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:48.877 #75 NEW cov: 11003 ft: 16684 corp: 5/129b lim: 32 exec/s: 75 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:48.877 #76 NEW cov: 11003 ft: 17040 corp: 6/161b lim: 32 exec/s: 76 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:49.135 #77 NEW cov: 11003 ft: 17422 corp: 7/193b lim: 32 exec/s: 77 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:49.395 #83 NEW cov: 11003 ft: 17758 corp: 8/225b lim: 32 exec/s: 83 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:49.654 #89 NEW cov: 11010 ft: 18247 corp: 9/257b lim: 32 exec/s: 89 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:49.913 #90 NEW cov: 11010 ft: 18487 corp: 10/289b lim: 32 exec/s: 45 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:49.913 #90 DONE cov: 11010 ft: 18487 corp: 10/289b lim: 32 exec/s: 45 rss: 74Mb 00:07:49.913 Done 90 runs in 2 second(s) 00:07:49.913 [2024-07-25 11:54:26.990951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 
-- # start_llvm_fuzz 5 1 0x1 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:50.174 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:50.174 11:54:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:50.174 [2024-07-25 11:54:27.306145] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:50.174 [2024-07-25 11:54:27.306240] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913481 ] 00:07:50.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.174 [2024-07-25 11:54:27.394994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.433 [2024-07-25 11:54:27.478941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.433 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:50.433 INFO: Seed: 1996636508
00:07:50.433 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55),
00:07:50.433 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8),
00:07:50.433 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:50.433 INFO: A corpus is not provided, starting from an empty corpus
00:07:50.433 #2 INITED exec/s: 0 rss: 66Mb
00:07:50.433 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:50.433 This may also happen if the target rejected all inputs we tried so far
00:07:50.702 [2024-07-25 11:54:27.748769] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:07:50.702 [2024-07-25 11:54:27.825593] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:50.702 [2024-07-25 11:54:27.825629] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:50.967 NEW_FUNC[1/661]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:07:50.967 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:50.967 #62 NEW cov: 10981 ft: 10695 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 5 InsertByte-InsertRepeatedBytes-ChangeBinInt-ShuffleBytes-CopyPart-
00:07:51.225 [2024-07-25 11:54:28.329626] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:51.225 [2024-07-25 11:54:28.329668] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:51.226 #63 NEW cov: 10995 ft: 14304 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CrossOver-
00:07:51.485 [2024-07-25 11:54:28.532780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:51.485 [2024-07-25 11:54:28.532811] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:51.485 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:51.485 #64 NEW cov: 11012 ft: 16417 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBit-
00:07:51.485 [2024-07-25 11:54:28.748470] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:51.485 [2024-07-25 11:54:28.748502] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:51.744 #65 NEW cov: 11012 ft: 16978 corp: 5/53b lim: 13 exec/s: 65 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes-
00:07:51.744 [2024-07-25 11:54:28.948618] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:51.744 [2024-07-25 11:54:28.948654] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:52.003 #71 NEW cov: 11012 ft: 17748 corp: 6/66b lim: 13 exec/s: 71 rss: 74Mb L: 13/13 MS: 1 ChangeByte-
00:07:52.003 [2024-07-25 11:54:29.145131] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:52.003 [2024-07-25 11:54:29.145164] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:52.003 #72 NEW cov: 11012 ft: 17821 corp: 7/79b lim: 13 exec/s: 72 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:52.263 [2024-07-25 11:54:29.352054] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:52.263 [2024-07-25 11:54:29.352086] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:52.263 #73 NEW cov: 11012 ft: 18102 corp: 8/92b lim: 13 exec/s: 73 rss: 74Mb L: 13/13 MS: 1 ChangeByte-
00:07:52.263 [2024-07-25 11:54:29.549051] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:52.263 [2024-07-25 11:54:29.549081] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:52.522 #74 NEW cov: 11019 ft: 18427 corp: 9/105b lim: 13 exec/s: 74 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "\201\000\000\000\000\000\000\000"-
00:07:52.522 [2024-07-25 11:54:29.767523] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:52.522 [2024-07-25 11:54:29.767553] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:52.781 #75 NEW cov: 11019 ft: 18566 corp: 10/118b lim: 13 exec/s: 37 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes-
00:07:52.781 #75 DONE cov: 11019 ft: 18566 corp: 10/118b lim: 13 exec/s: 37 rss: 74Mb
00:07:52.781 ###### Recommended dictionary. ######
00:07:52.781 "\201\000\000\000\000\000\000\000" # Uses: 0
00:07:52.781 ###### End of recommended dictionary. ######
00:07:52.781 Done 75 runs in 2 second(s)
00:07:52.781 [2024-07-25 11:54:29.905958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:07:53.041 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:53.041 11:54:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:07:53.041 [2024-07-25 11:54:30.222362] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:07:53.041 [2024-07-25 11:54:30.222436] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913918 ]
00:07:53.041 EAL: No free 2048 kB hugepages reported on node 1
00:07:53.041 [2024-07-25 11:54:30.311505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.300 [2024-07-25 11:54:30.392320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.300 INFO: Running with entropic power schedule (0xFF, 100).
00:07:53.300 INFO: Seed: 606692331
00:07:53.560 INFO: Loaded 1 modules (356297 inline 8-bit counters): 356297 [0x2987e8c, 0x29dee55),
00:07:53.560 INFO: Loaded 1 PC tables (356297 PCs): 356297 [0x29dee58,0x2f4eae8),
00:07:53.560 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:53.560 INFO: A corpus is not provided, starting from an empty corpus
00:07:53.560 #2 INITED exec/s: 0 rss: 66Mb
00:07:53.560 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:53.560 This may also happen if the target rejected all inputs we tried so far
00:07:53.560 [2024-07-25 11:54:30.657026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:07:53.560 [2024-07-25 11:54:30.728843] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:53.560 [2024-07-25 11:54:30.728876] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:54.127 NEW_FUNC[1/661]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:07:54.127 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:54.127 #6 NEW cov: 10969 ft: 10816 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 4 ChangeByte-InsertRepeatedBytes-ChangeBinInt-InsertByte-
00:07:54.127 [2024-07-25 11:54:31.233339] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:54.127 [2024-07-25 11:54:31.233386] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:54.127 #16 NEW cov: 10983 ft: 14417 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 EraseBytes-CopyPart-ChangeBit-CMP-InsertByte- DE: "4\000\000\000\000\000\000\000"-
00:07:54.127 [2024-07-25 11:54:31.425832] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:54.127 [2024-07-25 11:54:31.425865] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:54.386 NEW_FUNC[1/1]: 0x1a56580 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:54.386 #23 NEW cov: 11000 ft: 15814 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 EraseBytes-CopyPart-
00:07:54.386 [2024-07-25 11:54:31.635939] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:54.386 [2024-07-25 11:54:31.635972] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:54.645 #24 NEW cov: 11000 ft: 16151 corp: 5/37b lim: 9 exec/s: 24 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:54.645 [2024-07-25 11:54:31.832397] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:54.645 [2024-07-25 11:54:31.832429] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:54.905 #25 NEW cov: 11000 ft: 16365 corp: 6/46b lim: 9 exec/s: 25 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt-
00:07:54.905 [2024-07-25 11:54:32.034501] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:54.905 [2024-07-25 11:54:32.034531] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:54.905 #26 NEW cov: 11000 ft: 16518 corp: 7/55b lim: 9 exec/s: 26 rss: 74Mb L: 9/9 MS: 1 ChangeByte-
00:07:55.164 [2024-07-25 11:54:32.236027] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:55.164 [2024-07-25 11:54:32.236058] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:55.164 #28 NEW cov: 11000 ft: 16916 corp: 8/64b lim: 9 exec/s: 28 rss: 74Mb L: 9/9 MS: 2 CrossOver-InsertRepeatedBytes-
00:07:55.164 [2024-07-25 11:54:32.437689] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:55.164 [2024-07-25 11:54:32.437719] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:55.422 #29 NEW cov: 11007 ft: 17497 corp: 9/73b lim: 9 exec/s: 29 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt-
00:07:55.423 [2024-07-25 11:54:32.628011] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:55.423 [2024-07-25 11:54:32.628043] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:55.682 #30 NEW cov: 11007 ft: 17547 corp: 10/82b lim: 9 exec/s: 15 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:55.682 #30 DONE cov: 11007 ft: 17547 corp: 10/82b lim: 9 exec/s: 15 rss: 74Mb
00:07:55.682 ###### Recommended dictionary. ######
00:07:55.682 "4\000\000\000\000\000\000\000" # Uses: 0
00:07:55.682 ###### End of recommended dictionary. ######
00:07:55.682 Done 30 runs in 2 second(s)
00:07:55.682 [2024-07-25 11:54:32.761951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:07:55.941 11:54:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:07:55.941 11:54:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:55.941 11:54:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:55.941 11:54:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:07:55.941
00:07:55.941 real 0m20.272s
00:07:55.941 user 0m27.998s
00:07:55.941 sys 0m2.079s
00:07:55.941 11:54:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:55.941 11:54:33 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:55.941 ************************************
00:07:55.941 END TEST vfio_llvm_fuzz
00:07:55.941 ************************************
00:07:55.941 11:54:33 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:07:55.941
00:07:55.941 real 1m26.782s
00:07:55.941 user 2m8.881s
00:07:55.941 sys 0m10.888s
00:07:55.941 11:54:33 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:55.941 11:54:33 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:55.941 ************************************
00:07:55.941 END TEST llvm_fuzz
00:07:55.941 ************************************
00:07:55.941 11:54:33 -- spdk/autotest.sh@382 -- # trap - SIGINT SIGTERM EXIT
00:07:55.941 11:54:33 -- spdk/autotest.sh@384 -- # timing_enter post_cleanup
00:07:55.941 11:54:33 -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:55.941 11:54:33 -- common/autotest_common.sh@10 -- # set +x
00:07:55.941 11:54:33 -- spdk/autotest.sh@385 -- # autotest_cleanup
00:07:55.941 11:54:33 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:07:55.941 11:54:33 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:07:55.941 11:54:33 -- common/autotest_common.sh@10 -- # set +x
00:08:01.215 INFO: APP EXITING
00:08:01.215 INFO: killing all VMs
00:08:01.215 INFO: killing vhost app
00:08:01.215 WARN: no vhost pid file found
00:08:01.215 INFO: EXIT DONE
00:08:04.507 Waiting for block devices as requested
00:08:04.507 0000:5e:00.0 (144d a80a): vfio-pci -> nvme
00:08:04.507 0000:af:00.0 (8086 2701): vfio-pci -> nvme
00:08:04.507 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:04.507 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:04.507 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:04.507 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:04.507 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:04.766 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:04.766 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:04.766 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:05.026 0000:b0:00.0 (8086 2701): vfio-pci -> nvme
00:08:05.026 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:05.026 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:05.285 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:05.285 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:05.285 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:05.545 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:05.545 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:05.545 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:09.796 Cleaning
00:08:09.796 Removing: /dev/shm/spdk_tgt_trace.pid890557
00:08:09.796 Removing: /var/run/dpdk/spdk_pid890057
00:08:09.796 Removing: /var/run/dpdk/spdk_pid890557
00:08:09.796 Removing: /var/run/dpdk/spdk_pid891098
00:08:09.796 Removing: /var/run/dpdk/spdk_pid891854
00:08:09.796 Removing: /var/run/dpdk/spdk_pid892050
00:08:09.796 Removing: /var/run/dpdk/spdk_pid892829
00:08:09.796 Removing: /var/run/dpdk/spdk_pid893014
00:08:09.796 Removing: /var/run/dpdk/spdk_pid893336
00:08:09.796 Removing: /var/run/dpdk/spdk_pid893575
00:08:09.796 Removing: /var/run/dpdk/spdk_pid893818
00:08:09.796 Removing: /var/run/dpdk/spdk_pid894072
00:08:09.796 Removing: /var/run/dpdk/spdk_pid894368
00:08:09.796 Removing: /var/run/dpdk/spdk_pid894559
00:08:09.796 Removing: /var/run/dpdk/spdk_pid894738
00:08:09.796 Removing: /var/run/dpdk/spdk_pid895004
00:08:09.796 Removing: /var/run/dpdk/spdk_pid895728
00:08:09.796 Removing: /var/run/dpdk/spdk_pid898151
00:08:09.796 Removing: /var/run/dpdk/spdk_pid898376
00:08:09.796 Removing: /var/run/dpdk/spdk_pid898724
00:08:09.796 Removing: /var/run/dpdk/spdk_pid898766
00:08:09.796 Removing: /var/run/dpdk/spdk_pid899173
00:08:09.796 Removing: /var/run/dpdk/spdk_pid899351
00:08:09.796 Removing: /var/run/dpdk/spdk_pid899753
00:08:09.796 Removing: /var/run/dpdk/spdk_pid899935
00:08:09.796 Removing: /var/run/dpdk/spdk_pid900152
00:08:09.796 Removing: /var/run/dpdk/spdk_pid900330
00:08:09.796 Removing: /var/run/dpdk/spdk_pid900476
00:08:09.796 Removing: /var/run/dpdk/spdk_pid900562
00:08:09.796 Removing: /var/run/dpdk/spdk_pid901023
00:08:09.796 Removing: /var/run/dpdk/spdk_pid901224
00:08:09.796 Removing: /var/run/dpdk/spdk_pid901424
00:08:09.796 Removing: /var/run/dpdk/spdk_pid901590
00:08:09.796 Removing: /var/run/dpdk/spdk_pid902062
00:08:09.796 Removing: /var/run/dpdk/spdk_pid902427
00:08:09.796 Removing: /var/run/dpdk/spdk_pid902799
00:08:09.796 Removing: /var/run/dpdk/spdk_pid903168
00:08:09.796 Removing: /var/run/dpdk/spdk_pid903543
00:08:09.796 Removing: /var/run/dpdk/spdk_pid903918
00:08:09.796 Removing: /var/run/dpdk/spdk_pid904284
00:08:09.796 Removing: /var/run/dpdk/spdk_pid904659
00:08:09.796 Removing: /var/run/dpdk/spdk_pid905034
00:08:09.796 Removing: /var/run/dpdk/spdk_pid905400
00:08:09.796 Removing: /var/run/dpdk/spdk_pid905771
00:08:09.796 Removing: /var/run/dpdk/spdk_pid906151
00:08:09.796 Removing: /var/run/dpdk/spdk_pid906434
00:08:09.796 Removing: /var/run/dpdk/spdk_pid906725
00:08:09.796 Removing: /var/run/dpdk/spdk_pid907093
00:08:09.796 Removing: /var/run/dpdk/spdk_pid907464
00:08:09.796 Removing: /var/run/dpdk/spdk_pid907831
00:08:09.796 Removing: /var/run/dpdk/spdk_pid908203
00:08:09.796 Removing: /var/run/dpdk/spdk_pid908572
00:08:09.796 Removing: /var/run/dpdk/spdk_pid908886
00:08:09.796 Removing: /var/run/dpdk/spdk_pid909229
00:08:09.796 Removing: /var/run/dpdk/spdk_pid909652
00:08:09.796 Removing: /var/run/dpdk/spdk_pid910182
00:08:09.796 Removing: /var/run/dpdk/spdk_pid910787
00:08:09.796 Removing: /var/run/dpdk/spdk_pid911161
00:08:09.796 Removing: /var/run/dpdk/spdk_pid911609
00:08:09.796 Removing: /var/run/dpdk/spdk_pid911975
00:08:09.796 Removing: /var/run/dpdk/spdk_pid912350
00:08:09.796 Removing: /var/run/dpdk/spdk_pid912724
00:08:09.796 Removing: /var/run/dpdk/spdk_pid913100
00:08:09.796 Removing: /var/run/dpdk/spdk_pid913481
00:08:09.796 Removing: /var/run/dpdk/spdk_pid913918
00:08:09.796 Clean
00:08:09.796 11:54:46 -- common/autotest_common.sh@1451 -- # return 0
00:08:09.796 11:54:46 -- spdk/autotest.sh@386 -- # timing_exit post_cleanup
00:08:09.796 11:54:46 -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:09.796 11:54:46 -- common/autotest_common.sh@10 -- # set +x
00:08:09.796 11:54:46 -- spdk/autotest.sh@388 -- # timing_exit autotest
00:08:09.796 11:54:46 -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:09.796 11:54:46 -- common/autotest_common.sh@10 -- # set +x
00:08:09.796 11:54:46 -- spdk/autotest.sh@389 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:09.796 11:54:46 -- spdk/autotest.sh@391 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:09.796 11:54:46 -- spdk/autotest.sh@391 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:09.796 11:54:46 -- spdk/autotest.sh@393 -- # hash lcov
00:08:09.796 11:54:46 -- spdk/autotest.sh@393 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:08:09.796 11:54:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:08:09.796 11:54:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:08:09.796 11:54:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:09.796 11:54:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:09.796 11:54:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.796 11:54:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.796 11:54:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.796 11:54:46 -- paths/export.sh@5 -- $ export PATH
00:08:09.796 11:54:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.796 11:54:46 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:08:09.796 11:54:46 -- common/autobuild_common.sh@447 -- $ date +%s
00:08:09.796 11:54:46 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721901286.XXXXXX
00:08:09.796 11:54:46 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721901286.KnRlfd
00:08:09.796 11:54:46 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:08:09.796 11:54:46 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:08:09.796 11:54:46 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:08:09.796 11:54:46 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:08:09.796 11:54:46 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:08:09.796 11:54:46 -- common/autobuild_common.sh@463 -- $ get_config_params
00:08:09.796 11:54:46 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:08:09.796 11:54:46 -- common/autotest_common.sh@10 -- $ set +x
00:08:09.796 11:54:46 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:08:09.796 11:54:46 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:08:09.796 11:54:46 -- pm/common@17 -- $ local monitor
00:08:09.796 11:54:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:09.796 11:54:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:09.796 11:54:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:09.796 11:54:46 -- pm/common@21 -- $ date +%s
00:08:09.796 11:54:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:09.796 11:54:46 -- pm/common@21 -- $ date +%s
00:08:09.796 11:54:46 -- pm/common@25 -- $ sleep 1
00:08:09.797 11:54:47 -- pm/common@21 -- $ date +%s
00:08:09.797 11:54:47 -- pm/common@21 -- $ date +%s
00:08:09.797 11:54:47 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721901287
00:08:09.797 11:54:47 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721901287
00:08:09.797 11:54:47 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721901287
00:08:09.797 11:54:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721901287
00:08:09.797 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721901287_collect-vmstat.pm.log
00:08:09.797 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721901287_collect-cpu-load.pm.log
00:08:09.797 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721901287_collect-cpu-temp.pm.log
00:08:09.797 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721901287_collect-bmc-pm.bmc.pm.log
00:08:10.735 11:54:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:08:10.735 11:54:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:08:10.735 11:54:48 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:10.735 11:54:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:08:10.735 11:54:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:08:10.735 11:54:48 -- spdk/autopackage.sh@19 -- $ timing_finish
00:08:10.735 11:54:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:08:10.735 11:54:48 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:08:10.735 11:54:48 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:10.995 11:54:48 -- spdk/autopackage.sh@20 -- $ exit 0
00:08:10.995 11:54:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:08:10.995 11:54:48 -- pm/common@29 -- $ signal_monitor_resources TERM
00:08:10.995 11:54:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:08:10.995 11:54:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:10.995 11:54:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:08:10.995 11:54:48 -- pm/common@44 -- $ pid=919645
00:08:10.995 11:54:48 -- pm/common@50 -- $ kill -TERM 919645
00:08:10.995 11:54:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:10.995 11:54:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:08:10.995 11:54:48 -- pm/common@44 -- $ pid=919648
00:08:10.995 11:54:48 -- pm/common@50 -- $ kill -TERM 919648
00:08:10.995 11:54:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:10.995 11:54:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:08:10.995 11:54:48 -- pm/common@44 -- $ pid=919649
00:08:10.995 11:54:48 -- pm/common@50 -- $ kill -TERM 919649
00:08:10.995 11:54:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:10.995 11:54:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:08:10.995 11:54:48 -- pm/common@44 -- $ pid=919684
00:08:10.995 11:54:48 -- pm/common@50 -- $ sudo -E kill -TERM 919684
00:08:11.004 + [[ -n 789300 ]]
00:08:11.004 + sudo kill 789300
00:08:11.013 [Pipeline] }
00:08:11.030 [Pipeline] // stage
00:08:11.036 [Pipeline] }
00:08:11.049 [Pipeline] //
timeout
00:08:11.055 [Pipeline] }
00:08:11.070 [Pipeline] // catchError
00:08:11.075 [Pipeline] }
00:08:11.097 [Pipeline] // wrap
00:08:11.102 [Pipeline] }
00:08:11.115 [Pipeline] // catchError
00:08:11.124 [Pipeline] stage
00:08:11.125 [Pipeline] { (Epilogue)
00:08:11.134 [Pipeline] catchError
00:08:11.135 [Pipeline] {
00:08:11.144 [Pipeline] echo
00:08:11.145 Cleanup processes
00:08:11.149 [Pipeline] sh
00:08:11.428 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:11.428 919808 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:08:11.428 920463 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:11.443 [Pipeline] sh
00:08:11.782 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:11.782 ++ grep -v 'sudo pgrep'
00:08:11.782 ++ awk '{print $1}'
00:08:11.782 + sudo kill -9 919808
00:08:11.794 [Pipeline] sh
00:08:12.076 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:08:13.024 [Pipeline] sh
00:08:13.309 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:08:13.309 Artifacts sizes are good
00:08:13.323 [Pipeline] archiveArtifacts
00:08:13.329 Archiving artifacts
00:08:13.389 [Pipeline] sh
00:08:13.676 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:08:13.694 [Pipeline] cleanWs
00:08:13.704 [WS-CLEANUP] Deleting project workspace...
00:08:13.704 [WS-CLEANUP] Deferred wipeout is used...
00:08:13.711 [WS-CLEANUP] done
00:08:13.715 [Pipeline] }
00:08:13.738 [Pipeline] // catchError
00:08:13.751 [Pipeline] sh
00:08:14.035 + logger -p user.info -t JENKINS-CI
00:08:14.045 [Pipeline] }
00:08:14.062 [Pipeline] // stage
00:08:14.068 [Pipeline] }
00:08:14.083 [Pipeline] // node
00:08:14.089 [Pipeline] End of Pipeline
00:08:14.125 Finished: SUCCESS