00:00:00.001 Started by upstream project "autotest-per-patch" build number 132545 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.035 using credential 00000000-0000-0000-0000-000000000002 00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.054 Fetching changes from the remote Git repository 00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.070 Using shallow fetch with depth 1 00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.070 > git --version # timeout=10 00:00:00.081 > git --version # 'git version 2.39.2' 00:00:00.081 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.106 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.107 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.269 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.285 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.298 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.298 > git config core.sparsecheckout # timeout=10 00:00:02.309 > git read-tree -mu HEAD # timeout=10 00:00:02.324 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.354 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.354 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.543 [Pipeline] Start of Pipeline 00:00:02.554 [Pipeline] library 00:00:02.556 Loading library shm_lib@master 00:00:02.556 Library shm_lib@master is cached. Copying from home. 00:00:02.598 [Pipeline] node 00:00:02.617 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:02.618 [Pipeline] { 00:00:02.629 [Pipeline] catchError 00:00:02.630 [Pipeline] { 00:00:02.639 [Pipeline] wrap 00:00:02.646 [Pipeline] { 00:00:02.651 [Pipeline] stage 00:00:02.653 [Pipeline] { (Prologue) 00:00:02.957 [Pipeline] sh 00:00:03.236 + logger -p user.info -t JENKINS-CI 00:00:03.254 [Pipeline] echo 00:00:03.256 Node: WFP20 00:00:03.263 [Pipeline] sh 00:00:03.561 [Pipeline] setCustomBuildProperty 00:00:03.574 [Pipeline] echo 00:00:03.576 Cleanup processes 00:00:03.581 [Pipeline] sh 00:00:03.863 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.863 1480073 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.875 [Pipeline] sh 00:00:04.161 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.162 ++ grep -v 'sudo pgrep' 00:00:04.162 ++ awk '{print $1}' 00:00:04.162 + sudo kill -9 00:00:04.162 + true 00:00:04.226 [Pipeline] cleanWs 00:00:04.233 [WS-CLEANUP] Deleting project workspace... 00:00:04.233 [WS-CLEANUP] Deferred wipeout is used... 
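The "Cleanup processes" step above guards against stray SPDK processes left over from a previous run on this node. The traced commands boil down to roughly the following sketch (workspace path taken from the log; the empty-pid case is tolerated exactly as the trailing `+ true` shows, and the surrounding pipeline script is not reproduced here):

    pids=$(sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk \
           | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill fails harmlessly when no matching processes are found
    sudo kill -9 $pids || true
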
00:00:04.239 [WS-CLEANUP] done 00:00:04.242 [Pipeline] setCustomBuildProperty 00:00:04.253 [Pipeline] sh 00:00:04.529 + sudo git config --global --replace-all safe.directory '*' 00:00:04.610 [Pipeline] httpRequest 00:00:05.178 [Pipeline] echo 00:00:05.180 Sorcerer 10.211.164.20 is alive 00:00:05.187 [Pipeline] retry 00:00:05.188 [Pipeline] { 00:00:05.198 [Pipeline] httpRequest 00:00:05.201 HttpMethod: GET 00:00:05.201 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.202 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.204 Response Code: HTTP/1.1 200 OK 00:00:05.204 Success: Status code 200 is in the accepted range: 200,404 00:00:05.204 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.938 [Pipeline] } 00:00:05.956 [Pipeline] // retry 00:00:05.964 [Pipeline] sh 00:00:06.244 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.259 [Pipeline] httpRequest 00:00:06.663 [Pipeline] echo 00:00:06.664 Sorcerer 10.211.164.20 is alive 00:00:06.674 [Pipeline] retry 00:00:06.676 [Pipeline] { 00:00:06.687 [Pipeline] httpRequest 00:00:06.691 HttpMethod: GET 00:00:06.692 URL: http://10.211.164.20/packages/spdk_7cc16c9618a336e1f4094f7903a2096bfa7d577f.tar.gz 00:00:06.692 Sending request to url: http://10.211.164.20/packages/spdk_7cc16c9618a336e1f4094f7903a2096bfa7d577f.tar.gz 00:00:06.714 Response Code: HTTP/1.1 200 OK 00:00:06.715 Success: Status code 200 is in the accepted range: 200,404 00:00:06.715 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_7cc16c9618a336e1f4094f7903a2096bfa7d577f.tar.gz 00:01:16.123 [Pipeline] } 00:01:16.144 [Pipeline] // retry 00:01:16.152 [Pipeline] sh 00:01:16.436 + tar --no-same-owner -xf spdk_7cc16c9618a336e1f4094f7903a2096bfa7d577f.tar.gz 00:01:18.980 [Pipeline] sh 00:01:19.261 + git -C spdk log --oneline -n5 00:01:19.261 7cc16c961 bdevperf: g_main_thread calls bdev_open() instead of job->thread 00:01:19.261 3c5c3d590 bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak 00:01:19.261 f5304d661 bdev/malloc: Fix unexpected DIF verification error for initial read 00:01:19.261 baa2dd0a5 dif: Set DIF field to 0 explicitly if its check is disabled 00:01:19.261 a91d250fa bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata 00:01:19.271 [Pipeline] } 00:01:19.282 [Pipeline] // stage 00:01:19.290 [Pipeline] stage 00:01:19.293 [Pipeline] { (Prepare) 00:01:19.309 [Pipeline] writeFile 00:01:19.327 [Pipeline] sh 00:01:19.608 + logger -p user.info -t JENKINS-CI 00:01:19.619 [Pipeline] sh 00:01:19.899 + logger -p user.info -t JENKINS-CI 00:01:19.909 [Pipeline] sh 00:01:20.191 + cat autorun-spdk.conf 00:01:20.191 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.191 SPDK_TEST_FUZZER_SHORT=1 00:01:20.191 SPDK_TEST_FUZZER=1 00:01:20.191 SPDK_TEST_SETUP=1 00:01:20.191 SPDK_RUN_UBSAN=1 00:01:20.198 RUN_NIGHTLY=0 00:01:20.201 [Pipeline] readFile 00:01:20.219 [Pipeline] withEnv 00:01:20.220 [Pipeline] { 00:01:20.230 [Pipeline] sh 00:01:20.513 + set -ex 00:01:20.513 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:20.513 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:20.513 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.513 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:20.513 ++ SPDK_TEST_FUZZER=1 00:01:20.513 ++ SPDK_TEST_SETUP=1 00:01:20.513 ++ SPDK_RUN_UBSAN=1 
00:01:20.513 ++ RUN_NIGHTLY=0 00:01:20.513 + case $SPDK_TEST_NVMF_NICS in 00:01:20.514 + DRIVERS= 00:01:20.514 + [[ -n '' ]] 00:01:20.514 + exit 0 00:01:20.523 [Pipeline] } 00:01:20.537 [Pipeline] // withEnv 00:01:20.543 [Pipeline] } 00:01:20.557 [Pipeline] // stage 00:01:20.566 [Pipeline] catchError 00:01:20.568 [Pipeline] { 00:01:20.580 [Pipeline] timeout 00:01:20.581 Timeout set to expire in 30 min 00:01:20.582 [Pipeline] { 00:01:20.596 [Pipeline] stage 00:01:20.598 [Pipeline] { (Tests) 00:01:20.611 [Pipeline] sh 00:01:20.895 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:20.895 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:20.895 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:20.895 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:20.895 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:20.895 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:20.895 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:20.895 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:20.895 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:20.895 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:20.895 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:20.895 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:20.895 + source /etc/os-release 00:01:20.895 ++ NAME='Fedora Linux' 00:01:20.895 ++ VERSION='39 (Cloud Edition)' 00:01:20.895 ++ ID=fedora 00:01:20.895 ++ VERSION_ID=39 00:01:20.895 ++ VERSION_CODENAME= 00:01:20.895 ++ PLATFORM_ID=platform:f39 00:01:20.895 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.895 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.895 ++ LOGO=fedora-logo-icon 00:01:20.895 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.895 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.895 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.895 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.895 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.895 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.895 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.895 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.895 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.895 ++ SUPPORT_END=2024-11-12 00:01:20.895 ++ VARIANT='Cloud Edition' 00:01:20.895 ++ VARIANT_ID=cloud 00:01:20.895 + uname -a 00:01:20.895 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.895 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:24.185 Hugepages 00:01:24.185 node hugesize free / total 00:01:24.185 node0 1048576kB 0 / 0 00:01:24.185 node0 2048kB 0 / 0 00:01:24.185 node1 1048576kB 0 / 0 00:01:24.185 node1 2048kB 0 / 0 00:01:24.185 00:01:24.185 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.185 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:24.185 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:24.185 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:01:24.185 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:24.185 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:24.185 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:24.185 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:24.185 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:24.185 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:24.185 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:24.185 + rm -f /tmp/spdk-ld-path 00:01:24.185 + source autorun-spdk.conf 00:01:24.185 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.185 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:24.185 ++ SPDK_TEST_FUZZER=1 00:01:24.185 ++ SPDK_TEST_SETUP=1 00:01:24.185 ++ SPDK_RUN_UBSAN=1 00:01:24.185 ++ RUN_NIGHTLY=0 00:01:24.185 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.185 + [[ -n '' ]] 00:01:24.185 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:24.185 + for M in /var/spdk/build-*-manifest.txt 00:01:24.185 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:24.185 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:24.185 + for M in /var/spdk/build-*-manifest.txt 00:01:24.185 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.185 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:24.185 + for M in /var/spdk/build-*-manifest.txt 00:01:24.185 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.185 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:24.185 ++ uname 00:01:24.185 + [[ Linux == \L\i\n\u\x ]] 00:01:24.185 + sudo dmesg -T 00:01:24.185 + sudo dmesg --clear 00:01:24.185 + dmesg_pid=1481531 00:01:24.185 + [[ Fedora Linux == FreeBSD ]] 00:01:24.185 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.185 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.185 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.185 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.185 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.185 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.185 + sudo dmesg -Tw 00:01:24.185 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.185 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:24.185 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.185 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.185 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.185 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.185 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.185 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.185 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:24.185 20:03:36 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.185 20:03:36 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:24.185 20:03:36 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.185 20:03:36 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:24.185 20:03:36 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:24.185 20:03:36 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:24.185 20:03:36 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:24.185 20:03:36 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:24.185 20:03:36 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:24.185 20:03:36 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:24.186 20:03:36 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.186 20:03:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:24.186 20:03:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.186 20:03:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.186 20:03:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.186 20:03:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.186 20:03:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.186 20:03:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.186 20:03:36 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.186 20:03:36 -- paths/export.sh@5 -- $ export PATH 00:01:24.186 20:03:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.186 20:03:36 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:24.186 20:03:36 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:24.186 20:03:36 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732647816.XXXXXX 00:01:24.186 20:03:36 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732647816.4d4n6j 00:01:24.186 20:03:36 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:24.186 20:03:36 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:24.186 20:03:36 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:24.186 20:03:36 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.186 20:03:36 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.186 20:03:36 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:24.186 20:03:36 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.186 20:03:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.186 20:03:36 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:24.186 20:03:36 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:24.186 20:03:36 -- pm/common@17 -- $ local monitor 00:01:24.186 20:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.186 20:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.186 20:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.186 20:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.186 20:03:36 -- pm/common@25 -- $ sleep 1 00:01:24.186 20:03:36 -- pm/common@21 -- $ date +%s 00:01:24.186 20:03:36 -- pm/common@21 -- $ date +%s 00:01:24.186 20:03:36 -- pm/common@21 -- $ date +%s 00:01:24.186 20:03:36 -- pm/common@21 -- $ date +%s 00:01:24.186 20:03:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732647816 00:01:24.186 20:03:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732647816 00:01:24.186 20:03:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732647816 00:01:24.186 20:03:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732647816 00:01:24.186 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732647816_collect-cpu-temp.pm.log 00:01:24.186 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732647816_collect-vmstat.pm.log 00:01:24.186 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732647816_collect-cpu-load.pm.log 00:01:24.186 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732647816_collect-bmc-pm.bmc.pm.log 00:01:25.123 20:03:37 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:25.123 20:03:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.123 20:03:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.123 20:03:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:25.123 20:03:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.123 Tue Nov 26 07:03:37 PM UTC 2024 00:01:25.123 20:03:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.123 v25.01-pre-257-g7cc16c961 00:01:25.123 20:03:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:25.123 20:03:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.123 20:03:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.123 20:03:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.123 20:03:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.123 20:03:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.123 ************************************ 00:01:25.123 START TEST ubsan 00:01:25.123 ************************************ 00:01:25.123 20:03:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:25.123 using ubsan 00:01:25.123 00:01:25.123 real 0m0.000s 00:01:25.123 user 0m0.000s 00:01:25.123 sys 0m0.000s 00:01:25.123 20:03:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:25.123 20:03:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.123 ************************************ 00:01:25.123 END TEST ubsan 00:01:25.123 ************************************ 00:01:25.383 20:03:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.383 20:03:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.383 20:03:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.383 20:03:38 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:25.383 20:03:38 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:25.383 20:03:38 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:25.383 20:03:38 -- common/autotest_common.sh@1105 -- $ '[' 2 
-le 1 ']' 00:01:25.383 20:03:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.383 20:03:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.383 ************************************ 00:01:25.383 START TEST autobuild_llvm_precompile 00:01:25.383 ************************************ 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:25.383 Target: x86_64-redhat-linux-gnu 00:01:25.383 Thread model: posix 00:01:25.383 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:25.383 20:03:38 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:25.643 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:25.643 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:25.902 Using 'verbs' RDMA provider 00:01:42.204 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:54.438 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:54.698 Creating mk/config.mk...done. 00:01:54.698 Creating mk/cc.flags.mk...done. 00:01:54.698 Type 'make' to build. 
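The autobuild_llvm_precompile stage above detects the installed clang, exports it as the compiler, resolves the matching libFuzzer runtime, and reconfigures SPDK with it. A minimal sketch of that flow, using only values visible in this log (the helper logic in common/autobuild_common.sh is simplified, so treat this as a reconstruction rather than the exact script):

    # clang --version reported 17.0.6, so clang_num=17
    export CC=clang-17
    export CXX=clang++-17
    fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
    # abbreviated flag set; the full config_params line is in the log above
    ./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
                --with-vfio-user --with-fuzzer="$fuzzer_lib"
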
00:01:54.698 00:01:54.698 real 0m29.338s 00:01:54.698 user 0m12.808s 00:01:54.698 sys 0m15.937s 00:01:54.698 20:04:07 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:54.698 20:04:07 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:54.698 ************************************ 00:01:54.698 END TEST autobuild_llvm_precompile 00:01:54.698 ************************************ 00:01:54.698 20:04:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:54.698 20:04:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:54.698 20:04:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:54.698 20:04:07 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:54.698 20:04:07 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:54.957 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:54.957 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:55.524 Using 'verbs' RDMA provider 00:02:08.689 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:18.678 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:19.505 Creating mk/config.mk...done. 00:02:19.505 Creating mk/cc.flags.mk...done. 00:02:19.505 Type 'make' to build. 00:02:19.506 20:04:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:19.506 20:04:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:19.506 20:04:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:19.506 20:04:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.506 ************************************ 00:02:19.506 START TEST make 00:02:19.506 ************************************ 00:02:19.506 20:04:32 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:19.764 make[1]: Nothing to be done for 'all'. 
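The `make -j112` test that follows first builds the bundled libvfio-user through Meson; the summary printed below (debug build type, static default library, install via DESTDIR) corresponds roughly to the commands sketched here (source and build directories are taken from the log; the exact wrapper Makefile invocation is an assumption):

    # run from the SPDK repo root
    meson setup build/libvfio-user/build-debug libvfio-user \
        --buildtype=debug --default-library=static
    DESTDIR=$PWD/build/libvfio-user \
        meson install --quiet -C build/libvfio-user/build-debug
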
00:02:21.676 The Meson build system 00:02:21.676 Version: 1.5.0 00:02:21.676 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:21.676 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:21.676 Build type: native build 00:02:21.676 Project name: libvfio-user 00:02:21.676 Project version: 0.0.1 00:02:21.676 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:21.676 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:21.676 Host machine cpu family: x86_64 00:02:21.676 Host machine cpu: x86_64 00:02:21.676 Run-time dependency threads found: YES 00:02:21.676 Library dl found: YES 00:02:21.676 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.676 Run-time dependency json-c found: YES 0.17 00:02:21.676 Run-time dependency cmocka found: YES 1.1.7 00:02:21.676 Program pytest-3 found: NO 00:02:21.676 Program flake8 found: NO 00:02:21.676 Program misspell-fixer found: NO 00:02:21.676 Program restructuredtext-lint found: NO 00:02:21.676 Program valgrind found: YES (/usr/bin/valgrind) 00:02:21.676 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.676 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.676 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.676 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:21.676 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:21.676 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:21.676 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:21.676 Build targets in project: 8 00:02:21.676 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:21.676 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:21.676 00:02:21.676 libvfio-user 0.0.1 00:02:21.676 00:02:21.676 User defined options 00:02:21.676 buildtype : debug 00:02:21.676 default_library: static 00:02:21.676 libdir : /usr/local/lib 00:02:21.676 00:02:21.676 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.935 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:21.935 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:21.935 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:21.935 [3/36] Compiling C object samples/null.p/null.c.o 00:02:21.935 [4/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:21.935 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:21.935 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:21.935 [7/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:21.935 [8/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:21.935 [9/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:21.935 [10/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:21.935 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:21.935 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:21.935 [13/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:21.935 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:21.935 [15/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:21.935 [16/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:21.935 [17/36] Compiling C object samples/server.p/server.c.o 00:02:21.935 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:21.935 [19/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:21.935 [20/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:21.935 [21/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:21.935 [22/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:21.935 [23/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:21.935 [24/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:21.935 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:21.935 [26/36] Compiling C object samples/client.p/client.c.o 00:02:21.935 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:21.935 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:21.935 [29/36] Linking static target lib/libvfio-user.a 00:02:22.194 [30/36] Linking target samples/client 00:02:22.194 [31/36] Linking target test/unit_tests 00:02:22.194 [32/36] Linking target samples/server 00:02:22.194 [33/36] Linking target samples/gpio-pci-idio-16 00:02:22.194 [34/36] Linking target samples/shadow_ioeventfd_server 00:02:22.194 [35/36] Linking target samples/null 00:02:22.194 [36/36] Linking target samples/lspci 00:02:22.194 INFO: autodetecting backend as ninja 00:02:22.194 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:22.194 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:22.453 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:22.453 ninja: no work to do. 00:02:27.725 The Meson build system 00:02:27.725 Version: 1.5.0 00:02:27.725 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:27.725 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:27.725 Build type: native build 00:02:27.725 Program cat found: YES (/usr/bin/cat) 00:02:27.725 Project name: DPDK 00:02:27.725 Project version: 24.03.0 00:02:27.725 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:27.725 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:27.725 Host machine cpu family: x86_64 00:02:27.725 Host machine cpu: x86_64 00:02:27.725 Message: ## Building in Developer Mode ## 00:02:27.725 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:27.725 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:27.725 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:27.725 Program python3 found: YES (/usr/bin/python3) 00:02:27.725 Program cat found: YES (/usr/bin/cat) 00:02:27.725 Compiler for C supports arguments -march=native: YES 00:02:27.725 Checking for size of "void *" : 8 00:02:27.725 Checking for size of "void *" : 8 (cached) 00:02:27.725 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:27.725 Library m found: YES 00:02:27.725 Library numa found: YES 00:02:27.725 Has header "numaif.h" : YES 00:02:27.725 Library fdt found: NO 00:02:27.725 Library execinfo found: NO 00:02:27.725 Has header "execinfo.h" : YES 00:02:27.725 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:27.725 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:27.725 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:27.725 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:27.725 Run-time dependency openssl found: YES 3.1.1 00:02:27.725 Run-time dependency libpcap found: YES 1.10.4 00:02:27.725 Has header "pcap.h" with dependency libpcap: YES 00:02:27.725 Compiler for C supports arguments -Wcast-qual: YES 00:02:27.725 Compiler for C supports arguments -Wdeprecated: YES 00:02:27.725 Compiler for C supports arguments -Wformat: YES 00:02:27.725 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:27.725 Compiler for C supports arguments -Wformat-security: YES 00:02:27.725 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:27.725 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:27.725 Compiler for C supports arguments -Wnested-externs: YES 00:02:27.725 Compiler for C supports arguments -Wold-style-definition: YES 00:02:27.725 Compiler for C supports arguments -Wpointer-arith: YES 00:02:27.725 Compiler for C supports arguments -Wsign-compare: YES 00:02:27.725 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:27.725 Compiler for C supports arguments -Wundef: YES 00:02:27.725 Compiler for C supports arguments -Wwrite-strings: YES 00:02:27.725 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:27.725 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:27.725 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:02:27.725 Program objdump found: YES (/usr/bin/objdump) 00:02:27.725 Compiler for C supports arguments -mavx512f: YES 00:02:27.725 Checking if "AVX512 checking" compiles: YES 00:02:27.725 Fetching value of define "__SSE4_2__" : 1 00:02:27.725 Fetching value of define "__AES__" : 1 00:02:27.725 Fetching value of define "__AVX__" : 1 00:02:27.725 Fetching value of define "__AVX2__" : 1 00:02:27.725 Fetching value of define "__AVX512BW__" : 1 00:02:27.725 Fetching value of define "__AVX512CD__" : 1 00:02:27.725 Fetching value of define "__AVX512DQ__" : 1 00:02:27.725 Fetching value of define "__AVX512F__" : 1 00:02:27.725 Fetching value of define "__AVX512VL__" : 1 00:02:27.725 Fetching value of define "__PCLMUL__" : 1 00:02:27.725 Fetching value of define "__RDRND__" : 1 00:02:27.725 Fetching value of define "__RDSEED__" : 1 00:02:27.725 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:27.725 Fetching value of define "__znver1__" : (undefined) 00:02:27.725 Fetching value of define "__znver2__" : (undefined) 00:02:27.725 Fetching value of define "__znver3__" : (undefined) 00:02:27.725 Fetching value of define "__znver4__" : (undefined) 00:02:27.725 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:27.725 Message: lib/log: Defining dependency "log" 00:02:27.725 Message: lib/kvargs: Defining dependency "kvargs" 00:02:27.725 Message: lib/telemetry: Defining dependency "telemetry" 00:02:27.725 Checking for function "getentropy" : NO 00:02:27.725 Message: lib/eal: Defining dependency "eal" 00:02:27.725 Message: lib/ring: Defining dependency "ring" 00:02:27.725 Message: lib/rcu: Defining dependency "rcu" 00:02:27.725 Message: lib/mempool: Defining dependency "mempool" 00:02:27.725 Message: lib/mbuf: Defining dependency "mbuf" 00:02:27.725 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:27.725 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:27.725 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:27.725 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:27.725 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:27.725 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:27.725 Compiler for C supports arguments -mpclmul: YES 00:02:27.725 Compiler for C supports arguments -maes: YES 00:02:27.725 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.725 Compiler for C supports arguments -mavx512bw: YES 00:02:27.725 Compiler for C supports arguments -mavx512dq: YES 00:02:27.725 Compiler for C supports arguments -mavx512vl: YES 00:02:27.725 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:27.725 Compiler for C supports arguments -mavx2: YES 00:02:27.725 Compiler for C supports arguments -mavx: YES 00:02:27.725 Message: lib/net: Defining dependency "net" 00:02:27.725 Message: lib/meter: Defining dependency "meter" 00:02:27.725 Message: lib/ethdev: Defining dependency "ethdev" 00:02:27.725 Message: lib/pci: Defining dependency "pci" 00:02:27.725 Message: lib/cmdline: Defining dependency "cmdline" 00:02:27.725 Message: lib/hash: Defining dependency "hash" 00:02:27.725 Message: lib/timer: Defining dependency "timer" 00:02:27.725 Message: lib/compressdev: Defining dependency "compressdev" 00:02:27.725 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:27.725 Message: lib/dmadev: Defining dependency "dmadev" 00:02:27.725 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:27.725 Message: lib/power: Defining dependency "power" 00:02:27.725 Message: lib/reorder: Defining 
dependency "reorder" 00:02:27.725 Message: lib/security: Defining dependency "security" 00:02:27.725 Has header "linux/userfaultfd.h" : YES 00:02:27.725 Has header "linux/vduse.h" : YES 00:02:27.725 Message: lib/vhost: Defining dependency "vhost" 00:02:27.725 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:27.725 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:27.725 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:27.725 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:27.725 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:27.725 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:27.725 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:27.725 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:27.725 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:27.725 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:27.725 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:27.725 Configuring doxy-api-html.conf using configuration 00:02:27.725 Configuring doxy-api-man.conf using configuration 00:02:27.725 Program mandb found: YES (/usr/bin/mandb) 00:02:27.725 Program sphinx-build found: NO 00:02:27.725 Configuring rte_build_config.h using configuration 00:02:27.725 Message: 00:02:27.725 ================= 00:02:27.725 Applications Enabled 00:02:27.725 ================= 00:02:27.725 00:02:27.725 apps: 00:02:27.725 00:02:27.725 00:02:27.725 Message: 00:02:27.725 ================= 00:02:27.725 Libraries Enabled 00:02:27.726 ================= 00:02:27.726 00:02:27.726 libs: 00:02:27.726 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:27.726 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:27.726 cryptodev, dmadev, power, reorder, security, vhost, 00:02:27.726 00:02:27.726 Message: 00:02:27.726 =============== 00:02:27.726 Drivers Enabled 00:02:27.726 =============== 00:02:27.726 00:02:27.726 common: 00:02:27.726 00:02:27.726 bus: 00:02:27.726 pci, vdev, 00:02:27.726 mempool: 00:02:27.726 ring, 00:02:27.726 dma: 00:02:27.726 00:02:27.726 net: 00:02:27.726 00:02:27.726 crypto: 00:02:27.726 00:02:27.726 compress: 00:02:27.726 00:02:27.726 vdpa: 00:02:27.726 00:02:27.726 00:02:27.726 Message: 00:02:27.726 ================= 00:02:27.726 Content Skipped 00:02:27.726 ================= 00:02:27.726 00:02:27.726 apps: 00:02:27.726 dumpcap: explicitly disabled via build config 00:02:27.726 graph: explicitly disabled via build config 00:02:27.726 pdump: explicitly disabled via build config 00:02:27.726 proc-info: explicitly disabled via build config 00:02:27.726 test-acl: explicitly disabled via build config 00:02:27.726 test-bbdev: explicitly disabled via build config 00:02:27.726 test-cmdline: explicitly disabled via build config 00:02:27.726 test-compress-perf: explicitly disabled via build config 00:02:27.726 test-crypto-perf: explicitly disabled via build config 00:02:27.726 test-dma-perf: explicitly disabled via build config 00:02:27.726 test-eventdev: explicitly disabled via build config 00:02:27.726 test-fib: explicitly disabled via build config 00:02:27.726 test-flow-perf: explicitly disabled via build config 00:02:27.726 test-gpudev: explicitly disabled via build config 00:02:27.726 test-mldev: explicitly disabled via build config 00:02:27.726 test-pipeline: explicitly disabled via build config 00:02:27.726 test-pmd: 
explicitly disabled via build config 00:02:27.726 test-regex: explicitly disabled via build config 00:02:27.726 test-sad: explicitly disabled via build config 00:02:27.726 test-security-perf: explicitly disabled via build config 00:02:27.726 00:02:27.726 libs: 00:02:27.726 argparse: explicitly disabled via build config 00:02:27.726 metrics: explicitly disabled via build config 00:02:27.726 acl: explicitly disabled via build config 00:02:27.726 bbdev: explicitly disabled via build config 00:02:27.726 bitratestats: explicitly disabled via build config 00:02:27.726 bpf: explicitly disabled via build config 00:02:27.726 cfgfile: explicitly disabled via build config 00:02:27.726 distributor: explicitly disabled via build config 00:02:27.726 efd: explicitly disabled via build config 00:02:27.726 eventdev: explicitly disabled via build config 00:02:27.726 dispatcher: explicitly disabled via build config 00:02:27.726 gpudev: explicitly disabled via build config 00:02:27.726 gro: explicitly disabled via build config 00:02:27.726 gso: explicitly disabled via build config 00:02:27.726 ip_frag: explicitly disabled via build config 00:02:27.726 jobstats: explicitly disabled via build config 00:02:27.726 latencystats: explicitly disabled via build config 00:02:27.726 lpm: explicitly disabled via build config 00:02:27.726 member: explicitly disabled via build config 00:02:27.726 pcapng: explicitly disabled via build config 00:02:27.726 rawdev: explicitly disabled via build config 00:02:27.726 regexdev: explicitly disabled via build config 00:02:27.726 mldev: explicitly disabled via build config 00:02:27.726 rib: explicitly disabled via build config 00:02:27.726 sched: explicitly disabled via build config 00:02:27.726 stack: explicitly disabled via build config 00:02:27.726 ipsec: explicitly disabled via build config 00:02:27.726 pdcp: explicitly disabled via build config 00:02:27.726 fib: explicitly disabled via build config 00:02:27.726 port: explicitly disabled via build config 00:02:27.726 pdump: explicitly disabled via build config 00:02:27.726 table: explicitly disabled via build config 00:02:27.726 pipeline: explicitly disabled via build config 00:02:27.726 graph: explicitly disabled via build config 00:02:27.726 node: explicitly disabled via build config 00:02:27.726 00:02:27.726 drivers: 00:02:27.726 common/cpt: not in enabled drivers build config 00:02:27.726 common/dpaax: not in enabled drivers build config 00:02:27.726 common/iavf: not in enabled drivers build config 00:02:27.726 common/idpf: not in enabled drivers build config 00:02:27.726 common/ionic: not in enabled drivers build config 00:02:27.726 common/mvep: not in enabled drivers build config 00:02:27.726 common/octeontx: not in enabled drivers build config 00:02:27.726 bus/auxiliary: not in enabled drivers build config 00:02:27.726 bus/cdx: not in enabled drivers build config 00:02:27.726 bus/dpaa: not in enabled drivers build config 00:02:27.726 bus/fslmc: not in enabled drivers build config 00:02:27.726 bus/ifpga: not in enabled drivers build config 00:02:27.726 bus/platform: not in enabled drivers build config 00:02:27.726 bus/uacce: not in enabled drivers build config 00:02:27.726 bus/vmbus: not in enabled drivers build config 00:02:27.726 common/cnxk: not in enabled drivers build config 00:02:27.726 common/mlx5: not in enabled drivers build config 00:02:27.726 common/nfp: not in enabled drivers build config 00:02:27.726 common/nitrox: not in enabled drivers build config 00:02:27.726 common/qat: not in enabled drivers build config 
00:02:27.726 common/sfc_efx: not in enabled drivers build config 00:02:27.726 mempool/bucket: not in enabled drivers build config 00:02:27.726 mempool/cnxk: not in enabled drivers build config 00:02:27.726 mempool/dpaa: not in enabled drivers build config 00:02:27.726 mempool/dpaa2: not in enabled drivers build config 00:02:27.726 mempool/octeontx: not in enabled drivers build config 00:02:27.726 mempool/stack: not in enabled drivers build config 00:02:27.726 dma/cnxk: not in enabled drivers build config 00:02:27.726 dma/dpaa: not in enabled drivers build config 00:02:27.726 dma/dpaa2: not in enabled drivers build config 00:02:27.726 dma/hisilicon: not in enabled drivers build config 00:02:27.726 dma/idxd: not in enabled drivers build config 00:02:27.726 dma/ioat: not in enabled drivers build config 00:02:27.726 dma/skeleton: not in enabled drivers build config 00:02:27.726 net/af_packet: not in enabled drivers build config 00:02:27.726 net/af_xdp: not in enabled drivers build config 00:02:27.726 net/ark: not in enabled drivers build config 00:02:27.726 net/atlantic: not in enabled drivers build config 00:02:27.726 net/avp: not in enabled drivers build config 00:02:27.726 net/axgbe: not in enabled drivers build config 00:02:27.726 net/bnx2x: not in enabled drivers build config 00:02:27.726 net/bnxt: not in enabled drivers build config 00:02:27.726 net/bonding: not in enabled drivers build config 00:02:27.726 net/cnxk: not in enabled drivers build config 00:02:27.726 net/cpfl: not in enabled drivers build config 00:02:27.726 net/cxgbe: not in enabled drivers build config 00:02:27.726 net/dpaa: not in enabled drivers build config 00:02:27.726 net/dpaa2: not in enabled drivers build config 00:02:27.726 net/e1000: not in enabled drivers build config 00:02:27.726 net/ena: not in enabled drivers build config 00:02:27.726 net/enetc: not in enabled drivers build config 00:02:27.726 net/enetfec: not in enabled drivers build config 00:02:27.726 net/enic: not in enabled drivers build config 00:02:27.726 net/failsafe: not in enabled drivers build config 00:02:27.726 net/fm10k: not in enabled drivers build config 00:02:27.726 net/gve: not in enabled drivers build config 00:02:27.726 net/hinic: not in enabled drivers build config 00:02:27.726 net/hns3: not in enabled drivers build config 00:02:27.726 net/i40e: not in enabled drivers build config 00:02:27.726 net/iavf: not in enabled drivers build config 00:02:27.726 net/ice: not in enabled drivers build config 00:02:27.726 net/idpf: not in enabled drivers build config 00:02:27.726 net/igc: not in enabled drivers build config 00:02:27.726 net/ionic: not in enabled drivers build config 00:02:27.726 net/ipn3ke: not in enabled drivers build config 00:02:27.726 net/ixgbe: not in enabled drivers build config 00:02:27.726 net/mana: not in enabled drivers build config 00:02:27.726 net/memif: not in enabled drivers build config 00:02:27.726 net/mlx4: not in enabled drivers build config 00:02:27.726 net/mlx5: not in enabled drivers build config 00:02:27.726 net/mvneta: not in enabled drivers build config 00:02:27.726 net/mvpp2: not in enabled drivers build config 00:02:27.726 net/netvsc: not in enabled drivers build config 00:02:27.726 net/nfb: not in enabled drivers build config 00:02:27.726 net/nfp: not in enabled drivers build config 00:02:27.726 net/ngbe: not in enabled drivers build config 00:02:27.726 net/null: not in enabled drivers build config 00:02:27.726 net/octeontx: not in enabled drivers build config 00:02:27.726 net/octeon_ep: not in enabled 
drivers build config 00:02:27.726 net/pcap: not in enabled drivers build config 00:02:27.726 net/pfe: not in enabled drivers build config 00:02:27.726 net/qede: not in enabled drivers build config 00:02:27.726 net/ring: not in enabled drivers build config 00:02:27.726 net/sfc: not in enabled drivers build config 00:02:27.726 net/softnic: not in enabled drivers build config 00:02:27.726 net/tap: not in enabled drivers build config 00:02:27.726 net/thunderx: not in enabled drivers build config 00:02:27.726 net/txgbe: not in enabled drivers build config 00:02:27.726 net/vdev_netvsc: not in enabled drivers build config 00:02:27.726 net/vhost: not in enabled drivers build config 00:02:27.726 net/virtio: not in enabled drivers build config 00:02:27.726 net/vmxnet3: not in enabled drivers build config 00:02:27.726 raw/*: missing internal dependency, "rawdev" 00:02:27.726 crypto/armv8: not in enabled drivers build config 00:02:27.726 crypto/bcmfs: not in enabled drivers build config 00:02:27.726 crypto/caam_jr: not in enabled drivers build config 00:02:27.726 crypto/ccp: not in enabled drivers build config 00:02:27.726 crypto/cnxk: not in enabled drivers build config 00:02:27.726 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.726 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.726 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.726 crypto/mlx5: not in enabled drivers build config 00:02:27.726 crypto/mvsam: not in enabled drivers build config 00:02:27.726 crypto/nitrox: not in enabled drivers build config 00:02:27.726 crypto/null: not in enabled drivers build config 00:02:27.726 crypto/octeontx: not in enabled drivers build config 00:02:27.726 crypto/openssl: not in enabled drivers build config 00:02:27.727 crypto/scheduler: not in enabled drivers build config 00:02:27.727 crypto/uadk: not in enabled drivers build config 00:02:27.727 crypto/virtio: not in enabled drivers build config 00:02:27.727 compress/isal: not in enabled drivers build config 00:02:27.727 compress/mlx5: not in enabled drivers build config 00:02:27.727 compress/nitrox: not in enabled drivers build config 00:02:27.727 compress/octeontx: not in enabled drivers build config 00:02:27.727 compress/zlib: not in enabled drivers build config 00:02:27.727 regex/*: missing internal dependency, "regexdev" 00:02:27.727 ml/*: missing internal dependency, "mldev" 00:02:27.727 vdpa/ifc: not in enabled drivers build config 00:02:27.727 vdpa/mlx5: not in enabled drivers build config 00:02:27.727 vdpa/nfp: not in enabled drivers build config 00:02:27.727 vdpa/sfc: not in enabled drivers build config 00:02:27.727 event/*: missing internal dependency, "eventdev" 00:02:27.727 baseband/*: missing internal dependency, "bbdev" 00:02:27.727 gpu/*: missing internal dependency, "gpudev" 00:02:27.727 00:02:27.727 00:02:27.727 Build targets in project: 85 00:02:27.727 00:02:27.727 DPDK 24.03.0 00:02:27.727 00:02:27.727 User defined options 00:02:27.727 buildtype : debug 00:02:27.727 default_library : static 00:02:27.727 libdir : lib 00:02:27.727 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:27.727 c_args : -fPIC -Werror 00:02:27.727 c_link_args : 00:02:27.727 cpu_instruction_set: native 00:02:27.727 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:27.727 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:27.727 enable_docs : false 00:02:27.727 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:27.727 enable_kmods : false 00:02:27.727 max_lcores : 128 00:02:27.727 tests : false 00:02:27.727 00:02:27.727 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.989 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:27.989 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:27.989 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.989 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.989 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.989 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:27.989 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:27.989 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.989 [8/268] Linking static target lib/librte_kvargs.a 00:02:27.989 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.989 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.989 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:27.989 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:27.989 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.989 [14/268] Linking static target lib/librte_log.a 00:02:27.989 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:27.989 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.989 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.989 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:27.989 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.251 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:28.251 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:28.251 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:28.251 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:28.251 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:28.251 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:28.251 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:28.251 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:28.251 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:28.251 [29/268] Linking static target lib/librte_pci.a 00:02:28.251 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:28.251 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:28.251 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:28.251 [33/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.251 [34/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.251 [35/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.511 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.511 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:28.511 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.511 [39/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.511 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.511 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.511 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.511 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.511 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.511 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.511 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.511 [47/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.511 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.511 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.511 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.511 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.511 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.511 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.511 [54/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.511 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.511 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.511 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.511 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:28.511 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.511 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.511 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:28.511 [62/268] Linking static target lib/librte_meter.a 00:02:28.511 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.511 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.511 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:28.511 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:28.511 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.511 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.511 [69/268] Linking static target lib/librte_ring.a 00:02:28.511 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.511 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.511 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.511 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 
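The "User defined options" summary earlier in this log records how the bundled DPDK tree was configured before the [1/268]..[268/268] compile steps running above and below. The wrapper command that issued that configure step is not captured in this excerpt; as a rough, hand-runnable sketch of an equivalent meson/ninja invocation (prefix, build type, and option names copied from the summary, option lists abbreviated), it would look something like this:

  # Illustrative only -- the real SPDK build wrapper and its full option lists
  # are not shown in this log excerpt.
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
  meson setup build-tmp . \
    -Dprefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
    -Dbuildtype=debug \
    -Ddefault_library=static \
    -Dc_args='-fPIC -Werror' \
    -Dmax_lcores=128 \
    -Dtests=false -Denable_docs=false -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_libs=port,sched,rib,node   # truncated; the full lists are in the summary above
  ninja -C build-tmp

ninja then works through the remaining DPDK targets shown below before SPDK's own tree is built.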
00:02:28.511 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.511 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.511 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:28.511 [77/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.511 [78/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:28.511 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:28.511 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:28.511 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.511 [82/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.511 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.511 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.770 [85/268] Linking static target lib/librte_telemetry.a 00:02:28.770 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:28.770 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.770 [88/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:28.770 [89/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:28.770 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:28.770 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.770 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:28.770 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:28.771 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:28.771 [95/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.771 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.771 [97/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:28.771 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.771 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.771 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.771 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.771 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.771 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:28.771 [104/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:28.771 [105/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:28.771 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.771 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:28.771 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.771 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:28.771 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.771 [111/268] Linking static target lib/librte_timer.a 00:02:28.771 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:28.771 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 
00:02:28.771 [114/268] Linking static target lib/librte_cmdline.a 00:02:28.771 [115/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:28.771 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.771 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.771 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.771 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.771 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.771 [121/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:28.771 [122/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:28.771 [123/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:28.771 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.771 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:28.771 [126/268] Linking static target lib/librte_mempool.a 00:02:28.771 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:28.771 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:28.771 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:28.771 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:28.771 [131/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:28.771 [132/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:28.771 [133/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.771 [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:28.771 [135/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.771 [136/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.771 [137/268] Linking static target lib/librte_net.a 00:02:28.771 [138/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.771 [139/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:28.771 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.771 [141/268] Linking static target lib/librte_eal.a 00:02:28.771 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:28.771 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:28.771 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:28.771 [145/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.771 [146/268] Linking static target lib/librte_rcu.a 00:02:28.771 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:28.771 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:28.771 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:28.771 [150/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:28.771 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:28.771 [152/268] Linking static target lib/librte_mbuf.a 00:02:28.771 [153/268] Linking static target lib/librte_compressdev.a 00:02:28.771 [154/268] Linking static target lib/librte_dmadev.a 00:02:28.771 [155/268] Generating lib/log.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:29.030 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:29.030 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:29.030 [158/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.030 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:29.030 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.030 [161/268] Linking target lib/librte_log.so.24.1 00:02:29.030 [162/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.030 [163/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.030 [164/268] Linking static target lib/librte_hash.a 00:02:29.030 [165/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.030 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:29.030 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.030 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:29.030 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.030 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.030 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:29.030 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:29.030 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.030 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:29.030 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.030 [176/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:29.030 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.030 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:29.030 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:29.030 [180/268] Linking static target lib/librte_reorder.a 00:02:29.030 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:29.030 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:29.030 [183/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.030 [184/268] Linking static target lib/librte_power.a 00:02:29.030 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:29.030 [186/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:29.030 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:29.030 [188/268] Linking static target lib/librte_cryptodev.a 00:02:29.289 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:29.289 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.289 [191/268] Linking target lib/librte_kvargs.so.24.1 00:02:29.289 [192/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.289 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.289 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.289 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a 
custom command 00:02:29.289 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:29.289 [197/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.289 [198/268] Linking static target lib/librte_security.a 00:02:29.289 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.289 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.289 [201/268] Linking static target drivers/librte_bus_vdev.a 00:02:29.289 [202/268] Linking target lib/librte_telemetry.so.24.1 00:02:29.289 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:29.289 [204/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:29.289 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.289 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.289 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:29.289 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.289 [209/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:29.289 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:29.289 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:29.289 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.289 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.547 [214/268] Linking static target lib/librte_ethdev.a 00:02:29.547 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:29.547 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.547 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.547 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.547 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.548 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.548 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.806 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.806 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.069 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.069 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.069 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.069 [227/268] Linking static target lib/librte_vhost.a 00:02:30.069 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.327 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.263 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.199 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:38.790 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.322 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.581 [234/268] Linking target lib/librte_eal.so.24.1 00:02:41.581 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.839 [236/268] Linking target lib/librte_meter.so.24.1 00:02:41.839 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.839 [238/268] Linking target lib/librte_ring.so.24.1 00:02:41.839 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.839 [240/268] Linking target lib/librte_pci.so.24.1 00:02:41.839 [241/268] Linking target lib/librte_timer.so.24.1 00:02:41.839 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.839 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.839 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.839 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.839 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.839 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.839 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:41.839 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:42.098 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.098 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.098 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:42.098 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.356 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:42.356 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:42.356 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:42.356 [257/268] Linking target lib/librte_net.so.24.1 00:02:42.356 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:42.615 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.615 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.615 [261/268] Linking target lib/librte_hash.so.24.1 00:02:42.615 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.615 [263/268] Linking target lib/librte_security.so.24.1 00:02:42.615 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.615 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.615 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.874 [267/268] Linking target lib/librte_power.so.24.1 00:02:42.874 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:42.874 INFO: autodetecting backend as ninja 00:02:42.874 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:43.811 CC lib/ut/ut.o 00:02:43.811 CC lib/log/log.o 00:02:43.811 CC lib/log/log_flags.o 00:02:43.811 CC lib/log/log_deprecated.o 00:02:43.811 CC lib/ut_mock/mock.o 00:02:43.811 LIB libspdk_ut.a 00:02:44.069 LIB libspdk_log.a 00:02:44.069 LIB libspdk_ut_mock.a 00:02:44.327 CC lib/util/base64.o 00:02:44.327 CC lib/util/bit_array.o 00:02:44.327 CC 
lib/util/cpuset.o 00:02:44.327 CC lib/util/crc16.o 00:02:44.327 CC lib/util/crc32.o 00:02:44.327 CC lib/util/crc32c.o 00:02:44.327 CC lib/util/crc32_ieee.o 00:02:44.327 CC lib/util/crc64.o 00:02:44.327 CC lib/util/dif.o 00:02:44.327 CC lib/util/fd.o 00:02:44.328 CC lib/util/fd_group.o 00:02:44.328 CC lib/util/file.o 00:02:44.328 CC lib/util/hexlify.o 00:02:44.328 CC lib/util/iov.o 00:02:44.328 CC lib/util/math.o 00:02:44.328 CC lib/util/net.o 00:02:44.328 CC lib/util/pipe.o 00:02:44.328 CC lib/util/strerror_tls.o 00:02:44.328 CC lib/util/string.o 00:02:44.328 CC lib/util/uuid.o 00:02:44.328 CC lib/util/xor.o 00:02:44.328 CC lib/util/zipf.o 00:02:44.328 CC lib/util/md5.o 00:02:44.328 CC lib/ioat/ioat.o 00:02:44.328 CXX lib/trace_parser/trace.o 00:02:44.328 CC lib/dma/dma.o 00:02:44.328 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.328 CC lib/vfio_user/host/vfio_user.o 00:02:44.328 LIB libspdk_dma.a 00:02:44.328 LIB libspdk_ioat.a 00:02:44.586 LIB libspdk_vfio_user.a 00:02:44.586 LIB libspdk_util.a 00:02:44.845 LIB libspdk_trace_parser.a 00:02:44.845 CC lib/conf/conf.o 00:02:44.845 CC lib/env_dpdk/env.o 00:02:44.845 CC lib/env_dpdk/memory.o 00:02:44.845 CC lib/env_dpdk/pci.o 00:02:44.845 CC lib/env_dpdk/init.o 00:02:44.845 CC lib/env_dpdk/pci_virtio.o 00:02:44.845 CC lib/env_dpdk/pci_ioat.o 00:02:44.845 CC lib/env_dpdk/threads.o 00:02:44.845 CC lib/env_dpdk/pci_vmd.o 00:02:44.845 CC lib/env_dpdk/pci_idxd.o 00:02:44.845 CC lib/env_dpdk/pci_event.o 00:02:44.845 CC lib/env_dpdk/sigbus_handler.o 00:02:44.845 CC lib/env_dpdk/pci_dpdk.o 00:02:44.845 CC lib/idxd/idxd.o 00:02:44.845 CC lib/idxd/idxd_user.o 00:02:44.845 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.845 CC lib/idxd/idxd_kernel.o 00:02:44.845 CC lib/json/json_util.o 00:02:44.845 CC lib/json/json_parse.o 00:02:44.845 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.845 CC lib/json/json_write.o 00:02:44.845 CC lib/vmd/vmd.o 00:02:44.845 CC lib/vmd/led.o 00:02:44.845 CC lib/rdma_utils/rdma_utils.o 00:02:45.103 LIB libspdk_conf.a 00:02:45.103 LIB libspdk_json.a 00:02:45.103 LIB libspdk_rdma_utils.a 00:02:45.103 LIB libspdk_idxd.a 00:02:45.103 LIB libspdk_vmd.a 00:02:45.362 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.362 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.362 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.362 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.362 CC lib/rdma_provider/common.o 00:02:45.362 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.620 LIB libspdk_jsonrpc.a 00:02:45.620 LIB libspdk_rdma_provider.a 00:02:46.016 CC lib/rpc/rpc.o 00:02:46.016 LIB libspdk_env_dpdk.a 00:02:46.016 LIB libspdk_rpc.a 00:02:46.299 CC lib/trace/trace_rpc.o 00:02:46.299 CC lib/trace/trace.o 00:02:46.299 CC lib/trace/trace_flags.o 00:02:46.299 CC lib/keyring/keyring.o 00:02:46.299 CC lib/keyring/keyring_rpc.o 00:02:46.299 CC lib/notify/notify.o 00:02:46.299 CC lib/notify/notify_rpc.o 00:02:46.299 LIB libspdk_notify.a 00:02:46.299 LIB libspdk_trace.a 00:02:46.299 LIB libspdk_keyring.a 00:02:46.581 CC lib/sock/sock.o 00:02:46.581 CC lib/sock/sock_rpc.o 00:02:46.581 CC lib/thread/thread.o 00:02:46.581 CC lib/thread/iobuf.o 00:02:46.852 LIB libspdk_sock.a 00:02:47.118 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.118 CC lib/nvme/nvme_ctrlr.o 00:02:47.118 CC lib/nvme/nvme_fabric.o 00:02:47.118 CC lib/nvme/nvme_ns_cmd.o 00:02:47.118 CC lib/nvme/nvme_ns.o 00:02:47.118 CC lib/nvme/nvme_pcie_common.o 00:02:47.118 CC lib/nvme/nvme_pcie.o 00:02:47.118 CC lib/nvme/nvme_qpair.o 00:02:47.119 CC lib/nvme/nvme.o 00:02:47.119 CC lib/nvme/nvme_quirks.o 00:02:47.119 CC 
lib/nvme/nvme_transport.o 00:02:47.119 CC lib/nvme/nvme_discovery.o 00:02:47.119 CC lib/nvme/nvme_tcp.o 00:02:47.119 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.119 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.119 CC lib/nvme/nvme_opal.o 00:02:47.119 CC lib/nvme/nvme_io_msg.o 00:02:47.119 CC lib/nvme/nvme_stubs.o 00:02:47.119 CC lib/nvme/nvme_poll_group.o 00:02:47.119 CC lib/nvme/nvme_zns.o 00:02:47.119 CC lib/nvme/nvme_vfio_user.o 00:02:47.119 CC lib/nvme/nvme_auth.o 00:02:47.119 CC lib/nvme/nvme_rdma.o 00:02:47.119 CC lib/nvme/nvme_cuse.o 00:02:47.684 LIB libspdk_thread.a 00:02:47.685 CC lib/accel/accel.o 00:02:47.685 CC lib/accel/accel_rpc.o 00:02:47.685 CC lib/blob/blobstore.o 00:02:47.685 CC lib/blob/blob_bs_dev.o 00:02:47.685 CC lib/blob/request.o 00:02:47.685 CC lib/accel/accel_sw.o 00:02:47.685 CC lib/blob/zeroes.o 00:02:47.685 CC lib/init/json_config.o 00:02:47.685 CC lib/init/subsystem.o 00:02:47.685 CC lib/init/subsystem_rpc.o 00:02:47.685 CC lib/init/rpc.o 00:02:47.943 CC lib/fsdev/fsdev.o 00:02:47.943 CC lib/fsdev/fsdev_io.o 00:02:47.943 CC lib/virtio/virtio.o 00:02:47.943 CC lib/vfu_tgt/tgt_endpoint.o 00:02:47.943 CC lib/fsdev/fsdev_rpc.o 00:02:47.943 CC lib/virtio/virtio_vhost_user.o 00:02:47.943 CC lib/vfu_tgt/tgt_rpc.o 00:02:47.943 CC lib/virtio/virtio_vfio_user.o 00:02:47.943 CC lib/virtio/virtio_pci.o 00:02:47.943 LIB libspdk_init.a 00:02:47.943 LIB libspdk_virtio.a 00:02:47.943 LIB libspdk_vfu_tgt.a 00:02:48.201 LIB libspdk_fsdev.a 00:02:48.201 CC lib/event/app.o 00:02:48.201 CC lib/event/reactor.o 00:02:48.201 CC lib/event/log_rpc.o 00:02:48.201 CC lib/event/app_rpc.o 00:02:48.201 CC lib/event/scheduler_static.o 00:02:48.460 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:48.460 LIB libspdk_nvme.a 00:02:48.460 LIB libspdk_accel.a 00:02:48.460 LIB libspdk_event.a 00:02:48.718 CC lib/bdev/bdev.o 00:02:48.718 CC lib/bdev/part.o 00:02:48.718 CC lib/bdev/bdev_rpc.o 00:02:48.718 CC lib/bdev/bdev_zone.o 00:02:48.718 CC lib/bdev/scsi_nvme.o 00:02:48.977 LIB libspdk_fuse_dispatcher.a 00:02:49.545 LIB libspdk_blob.a 00:02:49.804 CC lib/lvol/lvol.o 00:02:49.804 CC lib/blobfs/blobfs.o 00:02:49.804 CC lib/blobfs/tree.o 00:02:50.372 LIB libspdk_lvol.a 00:02:50.372 LIB libspdk_blobfs.a 00:02:50.631 LIB libspdk_bdev.a 00:02:50.889 CC lib/nbd/nbd.o 00:02:50.889 CC lib/nvmf/ctrlr.o 00:02:50.889 CC lib/nbd/nbd_rpc.o 00:02:50.889 CC lib/nvmf/ctrlr_discovery.o 00:02:50.889 CC lib/nvmf/ctrlr_bdev.o 00:02:50.889 CC lib/nvmf/subsystem.o 00:02:50.889 CC lib/nvmf/nvmf.o 00:02:50.889 CC lib/nvmf/nvmf_rpc.o 00:02:50.889 CC lib/nvmf/tcp.o 00:02:50.889 CC lib/nvmf/transport.o 00:02:50.889 CC lib/scsi/dev.o 00:02:50.889 CC lib/nvmf/stubs.o 00:02:50.889 CC lib/scsi/lun.o 00:02:50.889 CC lib/nvmf/rdma.o 00:02:50.889 CC lib/nvmf/mdns_server.o 00:02:50.889 CC lib/scsi/port.o 00:02:50.889 CC lib/nvmf/vfio_user.o 00:02:50.889 CC lib/scsi/scsi.o 00:02:50.889 CC lib/scsi/scsi_bdev.o 00:02:50.889 CC lib/nvmf/auth.o 00:02:50.889 CC lib/scsi/scsi_pr.o 00:02:50.889 CC lib/scsi/scsi_rpc.o 00:02:50.889 CC lib/scsi/task.o 00:02:50.889 CC lib/ftl/ftl_core.o 00:02:50.889 CC lib/ftl/ftl_init.o 00:02:50.889 CC lib/ftl/ftl_layout.o 00:02:50.889 CC lib/ftl/ftl_debug.o 00:02:50.889 CC lib/ftl/ftl_io.o 00:02:50.889 CC lib/ublk/ublk.o 00:02:50.889 CC lib/ftl/ftl_sb.o 00:02:50.889 CC lib/ftl/ftl_l2p.o 00:02:50.889 CC lib/ublk/ublk_rpc.o 00:02:50.889 CC lib/ftl/ftl_l2p_flat.o 00:02:50.889 CC lib/ftl/ftl_nv_cache.o 00:02:50.889 CC lib/ftl/ftl_band.o 00:02:50.889 CC lib/ftl/ftl_band_ops.o 00:02:50.889 CC lib/ftl/ftl_writer.o 
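The CC lib/..., CC module/..., and LIB libspdk_*.a lines that began after the DPDK link step come from SPDK's own make-based build, configured against the DPDK build directory produced above (the "END TEST make" timing summary near the end of this log confirms the make driver). The configure flags used by this particular job are not visible in this excerpt; an illustrative equivalent would be:

  # Flags below are assumptions for illustration -- the job's real configure
  # line is not part of this log excerpt.
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  ./configure --enable-debug --with-dpdk=./dpdk/build
  make -j"$(nproc)"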
00:02:50.889 CC lib/ftl/ftl_rq.o 00:02:50.889 CC lib/ftl/ftl_reloc.o 00:02:50.889 CC lib/ftl/ftl_p2l.o 00:02:50.889 CC lib/ftl/ftl_l2p_cache.o 00:02:50.889 CC lib/ftl/ftl_p2l_log.o 00:02:50.889 CC lib/ftl/mngt/ftl_mngt.o 00:02:50.889 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:50.889 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:50.889 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:50.890 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:50.890 CC lib/ftl/utils/ftl_conf.o 00:02:50.890 CC lib/ftl/utils/ftl_md.o 00:02:50.890 CC lib/ftl/utils/ftl_mempool.o 00:02:50.890 CC lib/ftl/utils/ftl_bitmap.o 00:02:50.890 CC lib/ftl/utils/ftl_property.o 00:02:50.890 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:50.890 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:50.890 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:50.890 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:50.890 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:50.890 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:50.890 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:50.890 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:50.890 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:50.890 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:50.890 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:50.890 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:50.890 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:50.890 CC lib/ftl/base/ftl_base_dev.o 00:02:50.890 CC lib/ftl/ftl_trace.o 00:02:50.890 CC lib/ftl/base/ftl_base_bdev.o 00:02:51.147 LIB libspdk_nbd.a 00:02:51.147 LIB libspdk_scsi.a 00:02:51.406 LIB libspdk_ublk.a 00:02:51.406 LIB libspdk_ftl.a 00:02:51.665 CC lib/iscsi/conn.o 00:02:51.665 CC lib/iscsi/init_grp.o 00:02:51.665 CC lib/iscsi/iscsi.o 00:02:51.665 CC lib/iscsi/param.o 00:02:51.665 CC lib/iscsi/portal_grp.o 00:02:51.665 CC lib/iscsi/tgt_node.o 00:02:51.665 CC lib/iscsi/iscsi_subsystem.o 00:02:51.665 CC lib/iscsi/iscsi_rpc.o 00:02:51.665 CC lib/vhost/vhost.o 00:02:51.665 CC lib/iscsi/task.o 00:02:51.665 CC lib/vhost/vhost_blk.o 00:02:51.665 CC lib/vhost/vhost_rpc.o 00:02:51.665 CC lib/vhost/rte_vhost_user.o 00:02:51.665 CC lib/vhost/vhost_scsi.o 00:02:51.924 LIB libspdk_nvmf.a 00:02:52.183 LIB libspdk_vhost.a 00:02:52.183 LIB libspdk_iscsi.a 00:02:52.752 CC module/vfu_device/vfu_virtio.o 00:02:52.752 CC module/vfu_device/vfu_virtio_scsi.o 00:02:52.752 CC module/vfu_device/vfu_virtio_rpc.o 00:02:52.752 CC module/vfu_device/vfu_virtio_blk.o 00:02:52.752 CC module/vfu_device/vfu_virtio_fs.o 00:02:52.752 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.752 CC module/keyring/linux/keyring_rpc.o 00:02:52.752 CC module/keyring/linux/keyring.o 00:02:52.752 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:52.752 CC module/keyring/file/keyring.o 00:02:52.752 CC module/keyring/file/keyring_rpc.o 00:02:52.752 CC module/scheduler/gscheduler/gscheduler.o 00:02:52.752 CC module/accel/dsa/accel_dsa.o 00:02:52.752 CC module/accel/dsa/accel_dsa_rpc.o 00:02:52.752 CC module/accel/iaa/accel_iaa.o 00:02:52.752 CC module/accel/iaa/accel_iaa_rpc.o 00:02:52.752 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:52.752 CC module/accel/error/accel_error.o 00:02:52.752 CC module/accel/error/accel_error_rpc.o 00:02:52.752 LIB libspdk_env_dpdk_rpc.a 00:02:53.011 CC module/sock/posix/posix.o 
00:02:53.011 CC module/accel/ioat/accel_ioat.o 00:02:53.011 CC module/accel/ioat/accel_ioat_rpc.o 00:02:53.011 CC module/blob/bdev/blob_bdev.o 00:02:53.011 CC module/fsdev/aio/fsdev_aio.o 00:02:53.011 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:53.011 CC module/fsdev/aio/linux_aio_mgr.o 00:02:53.011 LIB libspdk_keyring_linux.a 00:02:53.011 LIB libspdk_keyring_file.a 00:02:53.011 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.011 LIB libspdk_scheduler_gscheduler.a 00:02:53.011 LIB libspdk_scheduler_dynamic.a 00:02:53.011 LIB libspdk_accel_iaa.a 00:02:53.011 LIB libspdk_accel_error.a 00:02:53.011 LIB libspdk_accel_ioat.a 00:02:53.011 LIB libspdk_accel_dsa.a 00:02:53.011 LIB libspdk_blob_bdev.a 00:02:53.011 LIB libspdk_vfu_device.a 00:02:53.271 LIB libspdk_sock_posix.a 00:02:53.271 LIB libspdk_fsdev_aio.a 00:02:53.530 CC module/bdev/null/bdev_null.o 00:02:53.530 CC module/bdev/null/bdev_null_rpc.o 00:02:53.530 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.530 CC module/bdev/split/vbdev_split.o 00:02:53.530 CC module/bdev/delay/vbdev_delay.o 00:02:53.530 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.530 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.530 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.530 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.530 CC module/bdev/raid/bdev_raid.o 00:02:53.530 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.530 CC module/bdev/gpt/gpt.o 00:02:53.530 CC module/bdev/raid/bdev_raid_rpc.o 00:02:53.530 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.530 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.530 CC module/bdev/raid/raid0.o 00:02:53.530 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.530 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.530 CC module/bdev/error/vbdev_error.o 00:02:53.530 CC module/bdev/malloc/bdev_malloc.o 00:02:53.530 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.530 CC module/bdev/raid/raid1.o 00:02:53.530 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.530 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:53.530 CC module/bdev/raid/concat.o 00:02:53.530 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.530 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.530 CC module/bdev/passthru/vbdev_passthru.o 00:02:53.530 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.530 CC module/bdev/nvme/bdev_nvme.o 00:02:53.530 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.530 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:53.530 CC module/bdev/nvme/vbdev_opal.o 00:02:53.530 CC module/bdev/nvme/nvme_rpc.o 00:02:53.530 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.530 CC module/bdev/nvme/bdev_mdns_client.o 00:02:53.530 CC module/bdev/aio/bdev_aio.o 00:02:53.530 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.530 CC module/bdev/lvol/vbdev_lvol.o 00:02:53.530 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:53.530 CC module/bdev/ftl/bdev_ftl.o 00:02:53.530 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.530 LIB libspdk_blobfs_bdev.a 00:02:53.789 LIB libspdk_bdev_split.a 00:02:53.789 LIB libspdk_bdev_error.a 00:02:53.789 LIB libspdk_bdev_null.a 00:02:53.789 LIB libspdk_bdev_gpt.a 00:02:53.789 LIB libspdk_bdev_ftl.a 00:02:53.789 LIB libspdk_bdev_passthru.a 00:02:53.789 LIB libspdk_bdev_zone_block.a 00:02:53.789 LIB libspdk_bdev_iscsi.a 00:02:53.789 LIB libspdk_bdev_aio.a 00:02:53.789 LIB libspdk_bdev_delay.a 00:02:53.789 LIB libspdk_bdev_malloc.a 00:02:53.789 LIB libspdk_bdev_virtio.a 00:02:53.789 LIB libspdk_bdev_lvol.a 00:02:54.048 LIB libspdk_bdev_raid.a 00:02:54.983 LIB libspdk_bdev_nvme.a 00:02:55.552 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.552 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.552 CC module/event/subsystems/vmd/vmd.o 00:02:55.552 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.552 CC module/event/subsystems/keyring/keyring.o 00:02:55.552 CC module/event/subsystems/sock/sock.o 00:02:55.552 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.552 CC module/event/subsystems/fsdev/fsdev.o 00:02:55.552 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.552 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:55.552 LIB libspdk_event_iobuf.a 00:02:55.552 LIB libspdk_event_vmd.a 00:02:55.552 LIB libspdk_event_keyring.a 00:02:55.552 LIB libspdk_event_vhost_blk.a 00:02:55.552 LIB libspdk_event_sock.a 00:02:55.552 LIB libspdk_event_fsdev.a 00:02:55.552 LIB libspdk_event_vfu_tgt.a 00:02:55.552 LIB libspdk_event_scheduler.a 00:02:56.120 CC module/event/subsystems/accel/accel.o 00:02:56.120 LIB libspdk_event_accel.a 00:02:56.387 CC module/event/subsystems/bdev/bdev.o 00:02:56.646 LIB libspdk_event_bdev.a 00:02:56.905 CC module/event/subsystems/nbd/nbd.o 00:02:56.905 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:56.905 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:56.905 CC module/event/subsystems/scsi/scsi.o 00:02:56.905 CC module/event/subsystems/ublk/ublk.o 00:02:56.905 LIB libspdk_event_nbd.a 00:02:56.905 LIB libspdk_event_ublk.a 00:02:56.905 LIB libspdk_event_scsi.a 00:02:56.905 LIB libspdk_event_nvmf.a 00:02:57.165 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.165 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.424 LIB libspdk_event_vhost_scsi.a 00:02:57.424 LIB libspdk_event_iscsi.a 00:02:57.682 CC app/trace_record/trace_record.o 00:02:57.682 CC app/spdk_top/spdk_top.o 00:02:57.682 CC app/spdk_lspci/spdk_lspci.o 00:02:57.682 CC app/spdk_nvme_identify/identify.o 00:02:57.682 CC test/rpc_client/rpc_client_test.o 00:02:57.682 CXX app/trace/trace.o 00:02:57.682 CC app/spdk_nvme_perf/perf.o 00:02:57.682 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.682 TEST_HEADER include/spdk/assert.h 00:02:57.682 TEST_HEADER include/spdk/accel.h 00:02:57.682 TEST_HEADER include/spdk/accel_module.h 00:02:57.682 TEST_HEADER include/spdk/barrier.h 00:02:57.682 TEST_HEADER include/spdk/base64.h 00:02:57.682 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.682 TEST_HEADER include/spdk/bdev.h 00:02:57.682 TEST_HEADER include/spdk/bdev_module.h 00:02:57.682 TEST_HEADER include/spdk/bit_array.h 00:02:57.682 TEST_HEADER include/spdk/bit_pool.h 00:02:57.682 TEST_HEADER include/spdk/blob.h 00:02:57.682 TEST_HEADER include/spdk/blobfs.h 00:02:57.682 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.682 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.682 TEST_HEADER include/spdk/cpuset.h 00:02:57.682 TEST_HEADER include/spdk/config.h 00:02:57.682 TEST_HEADER include/spdk/conf.h 00:02:57.682 TEST_HEADER include/spdk/crc32.h 00:02:57.682 TEST_HEADER include/spdk/crc64.h 00:02:57.682 TEST_HEADER include/spdk/crc16.h 00:02:57.682 TEST_HEADER include/spdk/dif.h 00:02:57.683 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.683 CC app/spdk_dd/spdk_dd.o 00:02:57.683 TEST_HEADER include/spdk/env.h 00:02:57.683 TEST_HEADER include/spdk/dma.h 00:02:57.683 TEST_HEADER include/spdk/endian.h 00:02:57.683 TEST_HEADER include/spdk/fd_group.h 00:02:57.683 TEST_HEADER include/spdk/fd.h 00:02:57.683 TEST_HEADER include/spdk/event.h 00:02:57.683 TEST_HEADER include/spdk/file.h 00:02:57.683 TEST_HEADER include/spdk/fsdev.h 00:02:57.683 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.683 TEST_HEADER include/spdk/fsdev_module.h 00:02:57.683 TEST_HEADER 
include/spdk/ftl.h 00:02:57.683 CC app/spdk_tgt/spdk_tgt.o 00:02:57.683 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:57.683 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.683 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.683 TEST_HEADER include/spdk/hexlify.h 00:02:57.683 TEST_HEADER include/spdk/histogram_data.h 00:02:57.683 TEST_HEADER include/spdk/ioat.h 00:02:57.683 TEST_HEADER include/spdk/idxd.h 00:02:57.683 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.683 TEST_HEADER include/spdk/json.h 00:02:57.683 TEST_HEADER include/spdk/init.h 00:02:57.683 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.683 TEST_HEADER include/spdk/keyring.h 00:02:57.683 TEST_HEADER include/spdk/keyring_module.h 00:02:57.683 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.683 TEST_HEADER include/spdk/log.h 00:02:57.683 TEST_HEADER include/spdk/lvol.h 00:02:57.683 CC app/nvmf_tgt/nvmf_main.o 00:02:57.683 TEST_HEADER include/spdk/likely.h 00:02:57.683 TEST_HEADER include/spdk/md5.h 00:02:57.683 TEST_HEADER include/spdk/memory.h 00:02:57.683 TEST_HEADER include/spdk/nbd.h 00:02:57.683 TEST_HEADER include/spdk/mmio.h 00:02:57.683 TEST_HEADER include/spdk/net.h 00:02:57.683 TEST_HEADER include/spdk/nvme.h 00:02:57.683 TEST_HEADER include/spdk/notify.h 00:02:57.683 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.683 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.683 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.683 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.683 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.683 TEST_HEADER include/spdk/nvmf.h 00:02:57.683 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.683 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.683 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.683 TEST_HEADER include/spdk/opal_spec.h 00:02:57.683 TEST_HEADER include/spdk/pci_ids.h 00:02:57.683 TEST_HEADER include/spdk/opal.h 00:02:57.683 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.683 TEST_HEADER include/spdk/pipe.h 00:02:57.683 TEST_HEADER include/spdk/queue.h 00:02:57.683 TEST_HEADER include/spdk/reduce.h 00:02:57.683 TEST_HEADER include/spdk/rpc.h 00:02:57.683 TEST_HEADER include/spdk/scheduler.h 00:02:57.683 TEST_HEADER include/spdk/scsi.h 00:02:57.683 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.683 TEST_HEADER include/spdk/sock.h 00:02:57.945 TEST_HEADER include/spdk/string.h 00:02:57.945 TEST_HEADER include/spdk/stdinc.h 00:02:57.945 TEST_HEADER include/spdk/trace.h 00:02:57.945 TEST_HEADER include/spdk/trace_parser.h 00:02:57.945 TEST_HEADER include/spdk/thread.h 00:02:57.945 TEST_HEADER include/spdk/util.h 00:02:57.945 TEST_HEADER include/spdk/ublk.h 00:02:57.945 TEST_HEADER include/spdk/uuid.h 00:02:57.945 TEST_HEADER include/spdk/tree.h 00:02:57.945 TEST_HEADER include/spdk/version.h 00:02:57.945 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:57.945 TEST_HEADER include/spdk/vhost.h 00:02:57.945 TEST_HEADER include/spdk/xor.h 00:02:57.945 TEST_HEADER include/spdk/zipf.h 00:02:57.945 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:57.945 TEST_HEADER include/spdk/vmd.h 00:02:57.945 CXX test/cpp_headers/accel.o 00:02:57.945 CXX test/cpp_headers/accel_module.o 00:02:57.945 CXX test/cpp_headers/barrier.o 00:02:57.945 CXX test/cpp_headers/base64.o 00:02:57.945 CXX test/cpp_headers/assert.o 00:02:57.945 CXX test/cpp_headers/bdev.o 00:02:57.945 CXX test/cpp_headers/bit_array.o 00:02:57.945 CXX test/cpp_headers/bit_pool.o 00:02:57.945 CXX test/cpp_headers/bdev_module.o 00:02:57.945 CXX test/cpp_headers/bdev_zone.o 00:02:57.945 CXX test/cpp_headers/blob_bdev.o 00:02:57.945 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:02:57.945 CXX test/cpp_headers/conf.o 00:02:57.945 CXX test/cpp_headers/config.o 00:02:57.945 CXX test/cpp_headers/blobfs_bdev.o 00:02:57.945 CXX test/cpp_headers/blobfs.o 00:02:57.945 CXX test/cpp_headers/blob.o 00:02:57.945 CXX test/cpp_headers/crc32.o 00:02:57.945 CXX test/cpp_headers/cpuset.o 00:02:57.945 CXX test/cpp_headers/crc64.o 00:02:57.945 CXX test/cpp_headers/dif.o 00:02:57.945 CXX test/cpp_headers/dma.o 00:02:57.945 CXX test/cpp_headers/crc16.o 00:02:57.945 CXX test/cpp_headers/endian.o 00:02:57.945 CXX test/cpp_headers/env_dpdk.o 00:02:57.945 CXX test/cpp_headers/env.o 00:02:57.945 CXX test/cpp_headers/fd.o 00:02:57.945 CC test/app/jsoncat/jsoncat.o 00:02:57.945 CXX test/cpp_headers/fd_group.o 00:02:57.945 CXX test/cpp_headers/file.o 00:02:57.945 CXX test/cpp_headers/event.o 00:02:57.945 CXX test/cpp_headers/fsdev_module.o 00:02:57.945 CXX test/cpp_headers/ftl.o 00:02:57.945 CC test/app/histogram_perf/histogram_perf.o 00:02:57.945 CXX test/cpp_headers/fsdev.o 00:02:57.945 CXX test/cpp_headers/fuse_dispatcher.o 00:02:57.945 CXX test/cpp_headers/hexlify.o 00:02:57.945 CXX test/cpp_headers/gpt_spec.o 00:02:57.945 CXX test/cpp_headers/idxd.o 00:02:57.945 CXX test/cpp_headers/histogram_data.o 00:02:57.945 CXX test/cpp_headers/idxd_spec.o 00:02:57.945 CXX test/cpp_headers/init.o 00:02:57.945 CXX test/cpp_headers/ioat.o 00:02:57.945 CXX test/cpp_headers/ioat_spec.o 00:02:57.945 CXX test/cpp_headers/iscsi_spec.o 00:02:57.945 CXX test/cpp_headers/json.o 00:02:57.945 CXX test/cpp_headers/jsonrpc.o 00:02:57.945 CC examples/ioat/perf/perf.o 00:02:57.945 CXX test/cpp_headers/keyring.o 00:02:57.945 CXX test/cpp_headers/keyring_module.o 00:02:57.945 CC test/thread/poller_perf/poller_perf.o 00:02:57.945 CXX test/cpp_headers/likely.o 00:02:57.945 CXX test/cpp_headers/log.o 00:02:57.945 CXX test/cpp_headers/memory.o 00:02:57.945 CXX test/cpp_headers/lvol.o 00:02:57.945 CXX test/cpp_headers/md5.o 00:02:57.945 CXX test/cpp_headers/nbd.o 00:02:57.945 CXX test/cpp_headers/mmio.o 00:02:57.945 CXX test/cpp_headers/net.o 00:02:57.945 CXX test/cpp_headers/notify.o 00:02:57.945 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.945 CXX test/cpp_headers/nvme.o 00:02:57.945 LINK spdk_lspci 00:02:57.945 CXX test/cpp_headers/nvme_intel.o 00:02:57.945 CXX test/cpp_headers/nvme_ocssd.o 00:02:57.945 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:57.945 CXX test/cpp_headers/nvme_spec.o 00:02:57.945 CC test/thread/lock/spdk_lock.o 00:02:57.945 CC test/env/pci/pci_ut.o 00:02:57.945 CXX test/cpp_headers/nvme_zns.o 00:02:57.945 CC test/env/memory/memory_ut.o 00:02:57.945 CXX test/cpp_headers/nvmf_cmd.o 00:02:57.945 CXX test/cpp_headers/nvmf.o 00:02:57.945 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:57.945 CC examples/util/zipf/zipf.o 00:02:57.945 CXX test/cpp_headers/nvmf_transport.o 00:02:57.945 CXX test/cpp_headers/opal.o 00:02:57.945 CXX test/cpp_headers/nvmf_spec.o 00:02:57.945 CC examples/ioat/verify/verify.o 00:02:57.945 CC test/app/stub/stub.o 00:02:57.945 CXX test/cpp_headers/opal_spec.o 00:02:57.945 CXX test/cpp_headers/pci_ids.o 00:02:57.945 CXX test/cpp_headers/pipe.o 00:02:57.945 CXX test/cpp_headers/queue.o 00:02:57.945 CXX test/cpp_headers/rpc.o 00:02:57.945 CXX test/cpp_headers/scsi.o 00:02:57.945 CXX test/cpp_headers/reduce.o 00:02:57.945 CXX test/cpp_headers/scheduler.o 00:02:57.945 CC test/env/vtophys/vtophys.o 00:02:57.945 CXX test/cpp_headers/sock.o 00:02:57.945 CXX test/cpp_headers/scsi_spec.o 00:02:57.945 CXX test/cpp_headers/stdinc.o 
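The long run of TEST_HEADER include/spdk/*.h entries followed by CXX test/cpp_headers/*.o compiles above looks like a per-header self-containment check: each public header is compiled in its own translation unit, so a header that fails to pull in its own dependencies fails the build. A minimal hand-rolled sketch of that idea (not the actual SPDK test harness; the temporary paths and compiler flags are assumptions):

  # Compile every public header in isolation; any header that is not
  # self-contained produces a compile error here.
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  mkdir -p /tmp/hdr_check
  for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      printf '#include "spdk/%s.h"\n' "$name" > "/tmp/hdr_check/$name.cpp"
      g++ -Iinclude -c "/tmp/hdr_check/$name.cpp" -o "/tmp/hdr_check/$name.o" \
          || echo "header spdk/$name.h is not self-contained"
  done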
00:02:57.945 CC app/fio/nvme/fio_plugin.o 00:02:57.945 CC test/dma/test_dma/test_dma.o 00:02:57.945 LINK rpc_client_test 00:02:57.945 CXX test/cpp_headers/string.o 00:02:57.945 CC app/fio/bdev/fio_plugin.o 00:02:57.945 CC test/app/bdev_svc/bdev_svc.o 00:02:57.945 CXX test/cpp_headers/thread.o 00:02:57.945 LINK spdk_nvme_discover 00:02:57.945 LINK spdk_trace_record 00:02:57.945 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:57.945 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:57.945 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:57.945 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.945 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:57.945 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:57.945 LINK jsoncat 00:02:57.945 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.945 LINK histogram_perf 00:02:57.945 CXX test/cpp_headers/trace.o 00:02:57.945 CXX test/cpp_headers/trace_parser.o 00:02:57.945 LINK poller_perf 00:02:57.945 CXX test/cpp_headers/tree.o 00:02:57.945 CXX test/cpp_headers/ublk.o 00:02:57.945 CXX test/cpp_headers/util.o 00:02:57.945 CXX test/cpp_headers/uuid.o 00:02:57.945 CXX test/cpp_headers/version.o 00:02:57.945 LINK nvmf_tgt 00:02:57.945 CXX test/cpp_headers/vfio_user_pci.o 00:02:57.945 LINK iscsi_tgt 00:02:58.205 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.205 LINK vtophys 00:02:58.205 CXX test/cpp_headers/vhost.o 00:02:58.205 CXX test/cpp_headers/vmd.o 00:02:58.205 CXX test/cpp_headers/xor.o 00:02:58.205 CXX test/cpp_headers/zipf.o 00:02:58.205 LINK interrupt_tgt 00:02:58.205 LINK zipf 00:02:58.205 LINK spdk_tgt 00:02:58.205 LINK env_dpdk_post_init 00:02:58.205 LINK stub 00:02:58.205 LINK ioat_perf 00:02:58.205 LINK verify 00:02:58.205 LINK bdev_svc 00:02:58.205 LINK spdk_trace 00:02:58.205 LINK spdk_dd 00:02:58.205 LINK pci_ut 00:02:58.205 LINK test_dma 00:02:58.463 LINK vhost_fuzz 00:02:58.463 LINK nvme_fuzz 00:02:58.463 LINK llvm_vfio_fuzz 00:02:58.463 LINK spdk_nvme_perf 00:02:58.463 LINK spdk_bdev 00:02:58.463 LINK spdk_nvme 00:02:58.463 LINK spdk_nvme_identify 00:02:58.463 LINK mem_callbacks 00:02:58.463 LINK llvm_nvme_fuzz 00:02:58.463 LINK spdk_top 00:02:58.722 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.722 CC examples/vmd/led/led.o 00:02:58.722 CC examples/sock/hello_world/hello_sock.o 00:02:58.722 CC examples/idxd/perf/perf.o 00:02:58.722 CC app/vhost/vhost.o 00:02:58.722 CC examples/thread/thread/thread_ex.o 00:02:58.722 LINK lsvmd 00:02:58.722 LINK led 00:02:58.722 LINK memory_ut 00:02:58.981 LINK vhost 00:02:58.981 LINK idxd_perf 00:02:58.981 LINK hello_sock 00:02:58.981 LINK thread 00:02:58.981 LINK spdk_lock 00:02:58.981 LINK iscsi_fuzz 00:02:59.550 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:59.550 CC examples/nvme/hello_world/hello_world.o 00:02:59.550 CC examples/nvme/arbitration/arbitration.o 00:02:59.550 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:59.550 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:59.550 CC examples/nvme/abort/abort.o 00:02:59.550 CC examples/nvme/reconnect/reconnect.o 00:02:59.550 CC examples/nvme/hotplug/hotplug.o 00:02:59.550 CC test/event/reactor/reactor.o 00:02:59.550 CC test/event/event_perf/event_perf.o 00:02:59.550 CC test/event/reactor_perf/reactor_perf.o 00:02:59.550 CC test/event/app_repeat/app_repeat.o 00:02:59.550 CC test/event/scheduler/scheduler.o 00:02:59.809 LINK hello_world 00:02:59.809 LINK cmb_copy 00:02:59.809 LINK pmr_persistence 00:02:59.809 LINK reactor 00:02:59.809 LINK event_perf 00:02:59.809 LINK reactor_perf 00:02:59.809 LINK hotplug 00:02:59.809 LINK reconnect 
00:02:59.809 LINK abort 00:02:59.809 LINK app_repeat 00:02:59.809 LINK arbitration 00:02:59.809 LINK nvme_manage 00:02:59.809 LINK scheduler 00:02:59.809 CC test/nvme/aer/aer.o 00:02:59.809 CC test/nvme/e2edp/nvme_dp.o 00:03:00.067 CC test/nvme/sgl/sgl.o 00:03:00.067 CC test/accel/dif/dif.o 00:03:00.067 CC test/nvme/boot_partition/boot_partition.o 00:03:00.067 CC test/nvme/reserve/reserve.o 00:03:00.067 CC test/nvme/err_injection/err_injection.o 00:03:00.067 CC test/nvme/connect_stress/connect_stress.o 00:03:00.067 CC test/nvme/startup/startup.o 00:03:00.067 CC test/nvme/fdp/fdp.o 00:03:00.067 CC test/nvme/reset/reset.o 00:03:00.067 CC test/nvme/overhead/overhead.o 00:03:00.067 CC test/nvme/simple_copy/simple_copy.o 00:03:00.067 CC test/nvme/cuse/cuse.o 00:03:00.067 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.067 CC test/nvme/compliance/nvme_compliance.o 00:03:00.067 CC test/blobfs/mkfs/mkfs.o 00:03:00.067 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.067 CC test/lvol/esnap/esnap.o 00:03:00.067 LINK boot_partition 00:03:00.067 LINK startup 00:03:00.067 LINK reserve 00:03:00.067 LINK connect_stress 00:03:00.067 LINK err_injection 00:03:00.067 LINK doorbell_aers 00:03:00.067 LINK fused_ordering 00:03:00.067 LINK simple_copy 00:03:00.067 LINK aer 00:03:00.067 LINK nvme_dp 00:03:00.067 LINK sgl 00:03:00.067 LINK reset 00:03:00.067 LINK mkfs 00:03:00.067 LINK fdp 00:03:00.067 LINK overhead 00:03:00.326 LINK nvme_compliance 00:03:00.326 LINK dif 00:03:00.585 CC examples/accel/perf/accel_perf.o 00:03:00.585 CC examples/blob/hello_world/hello_blob.o 00:03:00.585 CC examples/blob/cli/blobcli.o 00:03:00.585 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:00.844 LINK hello_blob 00:03:00.844 LINK hello_fsdev 00:03:00.844 LINK cuse 00:03:00.844 LINK accel_perf 00:03:00.844 LINK blobcli 00:03:01.782 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.782 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.782 LINK hello_bdev 00:03:02.041 CC test/bdev/bdevio/bdevio.o 00:03:02.041 LINK bdevperf 00:03:02.301 LINK bdevio 00:03:03.683 LINK esnap 00:03:03.683 CC examples/nvmf/nvmf/nvmf.o 00:03:03.683 LINK nvmf 00:03:05.063 00:03:05.063 real 0m45.373s 00:03:05.063 user 6m15.867s 00:03:05.063 sys 2m30.728s 00:03:05.063 20:05:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:05.063 20:05:17 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.063 ************************************ 00:03:05.063 END TEST make 00:03:05.063 ************************************ 00:03:05.063 20:05:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.063 20:05:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.063 20:05:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.063 20:05:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.063 20:05:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.063 20:05:17 -- pm/common@44 -- $ pid=1481575 00:03:05.063 20:05:17 -- pm/common@50 -- $ kill -TERM 1481575 00:03:05.063 20:05:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.063 20:05:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.063 20:05:17 -- pm/common@44 -- $ pid=1481576 00:03:05.063 20:05:17 -- pm/common@50 -- $ kill -TERM 1481576 00:03:05.063 20:05:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.063 20:05:17 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:05.063 20:05:17 -- pm/common@44 -- $ pid=1481578 00:03:05.063 20:05:17 -- pm/common@50 -- $ kill -TERM 1481578 00:03:05.063 20:05:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.063 20:05:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.063 20:05:17 -- pm/common@44 -- $ pid=1481602 00:03:05.063 20:05:17 -- pm/common@50 -- $ sudo -E kill -TERM 1481602 00:03:05.063 20:05:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:05.063 20:05:17 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:05.063 20:05:17 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:05.063 20:05:17 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:05.063 20:05:17 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:05.063 20:05:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:05.063 20:05:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:05.063 20:05:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:05.063 20:05:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:05.063 20:05:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:05.063 20:05:17 -- scripts/common.sh@336 -- # read -ra ver1 00:03:05.063 20:05:17 -- scripts/common.sh@337 -- # IFS=.-: 00:03:05.063 20:05:17 -- scripts/common.sh@337 -- # read -ra ver2 00:03:05.063 20:05:17 -- scripts/common.sh@338 -- # local 'op=<' 00:03:05.063 20:05:17 -- scripts/common.sh@340 -- # ver1_l=2 00:03:05.063 20:05:17 -- scripts/common.sh@341 -- # ver2_l=1 00:03:05.063 20:05:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:05.063 20:05:17 -- scripts/common.sh@344 -- # case "$op" in 00:03:05.063 20:05:17 -- scripts/common.sh@345 -- # : 1 00:03:05.063 20:05:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:05.063 20:05:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.063 20:05:17 -- scripts/common.sh@365 -- # decimal 1 00:03:05.063 20:05:17 -- scripts/common.sh@353 -- # local d=1 00:03:05.063 20:05:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:05.063 20:05:17 -- scripts/common.sh@355 -- # echo 1 00:03:05.063 20:05:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:05.063 20:05:17 -- scripts/common.sh@366 -- # decimal 2 00:03:05.063 20:05:17 -- scripts/common.sh@353 -- # local d=2 00:03:05.063 20:05:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:05.063 20:05:17 -- scripts/common.sh@355 -- # echo 2 00:03:05.063 20:05:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:05.063 20:05:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:05.063 20:05:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:05.064 20:05:17 -- scripts/common.sh@368 -- # return 0 00:03:05.064 20:05:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:05.064 20:05:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:05.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.064 --rc genhtml_branch_coverage=1 00:03:05.064 --rc genhtml_function_coverage=1 00:03:05.064 --rc genhtml_legend=1 00:03:05.064 --rc geninfo_all_blocks=1 00:03:05.064 --rc geninfo_unexecuted_blocks=1 00:03:05.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.064 ' 00:03:05.064 20:05:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:05.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.064 --rc genhtml_branch_coverage=1 00:03:05.064 --rc genhtml_function_coverage=1 00:03:05.064 --rc genhtml_legend=1 00:03:05.064 --rc geninfo_all_blocks=1 00:03:05.064 --rc geninfo_unexecuted_blocks=1 00:03:05.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.064 ' 00:03:05.064 20:05:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:05.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.064 --rc genhtml_branch_coverage=1 00:03:05.064 --rc genhtml_function_coverage=1 00:03:05.064 --rc genhtml_legend=1 00:03:05.064 --rc geninfo_all_blocks=1 00:03:05.064 --rc geninfo_unexecuted_blocks=1 00:03:05.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.064 ' 00:03:05.064 20:05:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:05.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.064 --rc genhtml_branch_coverage=1 00:03:05.064 --rc genhtml_function_coverage=1 00:03:05.064 --rc genhtml_legend=1 00:03:05.064 --rc geninfo_all_blocks=1 00:03:05.064 --rc geninfo_unexecuted_blocks=1 00:03:05.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.064 ' 00:03:05.064 20:05:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.064 20:05:17 -- nvmf/common.sh@7 -- # uname -s 00:03:05.064 20:05:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.064 20:05:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.064 20:05:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.064 20:05:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.064 20:05:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.064 20:05:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.064 20:05:17 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.064 20:05:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.064 20:05:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.064 20:05:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.324 20:05:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:05.324 20:05:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:05.324 20:05:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.324 20:05:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.324 20:05:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:05.324 20:05:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.324 20:05:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:05.324 20:05:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:05.324 20:05:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.324 20:05:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.324 20:05:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.324 20:05:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.324 20:05:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.324 20:05:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.324 20:05:18 -- paths/export.sh@5 -- # export PATH 00:03:05.324 20:05:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.324 20:05:18 -- nvmf/common.sh@51 -- # : 0 00:03:05.324 20:05:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:05.324 20:05:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:05.324 20:05:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.324 20:05:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.324 20:05:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.324 20:05:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:05.324 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:05.324 20:05:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:05.324 20:05:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:05.324 20:05:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:05.324 20:05:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.324 20:05:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.324 
20:05:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.324 20:05:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.324 20:05:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:05.324 20:05:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.324 20:05:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:05.324 20:05:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.324 20:05:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.324 20:05:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.324 20:05:18 -- spdk/autotest.sh@48 -- # udevadm_pid=1544655 00:03:05.324 20:05:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.324 20:05:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.324 20:05:18 -- pm/common@17 -- # local monitor 00:03:05.324 20:05:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.324 20:05:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.324 20:05:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.324 20:05:18 -- pm/common@21 -- # date +%s 00:03:05.324 20:05:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.324 20:05:18 -- pm/common@21 -- # date +%s 00:03:05.324 20:05:18 -- pm/common@25 -- # sleep 1 00:03:05.324 20:05:18 -- pm/common@21 -- # date +%s 00:03:05.324 20:05:18 -- pm/common@21 -- # date +%s 00:03:05.324 20:05:18 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732647918 00:03:05.324 20:05:18 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732647918 00:03:05.324 20:05:18 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732647918 00:03:05.324 20:05:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732647918 00:03:05.324 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732647918_collect-vmstat.pm.log 00:03:05.324 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732647918_collect-cpu-load.pm.log 00:03:05.324 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732647918_collect-cpu-temp.pm.log 00:03:05.324 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732647918_collect-bmc-pm.bmc.pm.log 00:03:06.263 20:05:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.263 20:05:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.263 20:05:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:06.263 20:05:19 -- common/autotest_common.sh@10 -- # set +x 
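The autotest.sh@33-40 lines above show the run swapping the kernel's core-dump handler: the previous pattern (systemd-coredump) is saved, an output directory for cores is created, and a pipe to core-collector.sh with the %P (PID), %s (signal) and %t (timestamp) specifiers is installed so that any crash during the run lands under the output tree. The xtrace does not show where the new pattern is written; the sketch below assumes the conventional /proc/sys/kernel/core_pattern target and uses placeholder paths rather than the jenkins workspace ones.

  #!/usr/bin/env bash
  # Sketch only (needs root): funnel core dumps through a collector script for
  # the duration of a test run, then restore the previous handler on exit.
  # /usr/local/bin/core-collector.sh and /tmp/coredumps are placeholder paths.
  set -euo pipefail

  coredump_dir=/tmp/coredumps
  mkdir -p "$coredump_dir"

  # Remember whatever handler was installed before (systemd-coredump here).
  old_core_pattern=$(</proc/sys/kernel/core_pattern)

  # %P = crashing PID, %s = signal number, %t = dump time, the same
  # specifiers the trace passes to core-collector.sh.
  echo "|/usr/local/bin/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

  # Put the original pattern back no matter how the run ends.
  trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT

  # ... run tests that may dump core here ...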
00:03:06.263 20:05:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.263 20:05:19 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:06.263 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:03:06.263 20:05:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:06.263 20:05:19 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:06.263 20:05:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:06.263 20:05:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:06.263 20:05:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:06.263 20:05:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.263 20:05:19 -- common/autotest_common.sh@1457 -- # uname 00:03:06.263 20:05:19 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:06.263 20:05:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.263 20:05:19 -- common/autotest_common.sh@1477 -- # uname 00:03:06.263 20:05:19 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:06.263 20:05:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:06.263 20:05:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:03:06.263 lcov: LCOV version 1.15 00:03:06.263 20:05:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:14.378 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:14.944 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:23.061 20:05:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:23.061 20:05:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.061 20:05:34 -- common/autotest_common.sh@10 -- # set +x 00:03:23.061 20:05:34 -- spdk/autotest.sh@78 -- # rm -f 00:03:23.061 20:05:34 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.962 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:24.962 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:24.962 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:25.220 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:25.220 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:25.220 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:25.220 20:05:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:25.220 20:05:37 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:25.220 20:05:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:25.220 20:05:37 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:25.220 20:05:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:25.220 20:05:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:25.220 20:05:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:25.220 20:05:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.220 20:05:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:25.220 20:05:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:25.220 20:05:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.220 20:05:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:25.220 20:05:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:25.220 20:05:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:25.220 20:05:37 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:25.220 No valid GPT data, bailing 00:03:25.220 20:05:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:25.220 20:05:38 -- scripts/common.sh@394 -- # pt= 00:03:25.220 20:05:38 -- scripts/common.sh@395 -- # return 1 00:03:25.220 20:05:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:25.220 1+0 records in 00:03:25.220 1+0 records out 00:03:25.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00333487 s, 314 MB/s 00:03:25.220 20:05:38 -- spdk/autotest.sh@105 -- # sync 00:03:25.220 20:05:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:25.220 20:05:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:25.220 20:05:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:33.327 20:05:45 -- spdk/autotest.sh@111 -- # uname -s 00:03:33.327 20:05:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:33.327 20:05:45 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:33.327 20:05:45 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:33.327 20:05:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.327 20:05:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.327 20:05:45 -- common/autotest_common.sh@10 -- # set +x 00:03:33.327 ************************************ 00:03:33.327 START TEST setup.sh 00:03:33.328 ************************************ 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:33.328 * Looking for test storage... 
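The pre_cleanup sequence above resets the attached storage before any tests run: each NVMe namespace is checked via /sys/block/<name>/queue/zoned (zoned namespaces are skipped), spdk-gpt.py and blkid look for an existing partition table, and namespaces that come back empty ("No valid GPT data, bailing") get their first MiB zeroed before a final sync. A condensed sketch of that flow follows; it keeps only the blkid probe, leaves out spdk-gpt.py, and is destructive by design, so the device glob is an assumption to adjust before running anything like it.

  #!/usr/bin/env bash
  # Sketch of the pre-test wipe seen in the trace. Destructive: zeroes the
  # first MiB of every non-zoned, unpartitioned NVMe namespace it finds.
  set -euo pipefail
  shopt -s extglob nullglob   # !(*p*) excludes partitions such as nvme0n1p1

  for dev in /dev/nvme*n!(*p*); do
      name=${dev##*/}

      # Zoned namespaces report something other than "none" in sysfs; skip them.
      zoned_file=/sys/block/$name/queue/zoned
      if [[ -e $zoned_file && $(<"$zoned_file") != none ]]; then
          continue
      fi

      # No partition-table type from blkid means the namespace looks unused,
      # so clear the first MiB where GPT/MBR metadata would live.
      if [[ -z $(blkid -s PTTYPE -o value "$dev" || true) ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done
  sync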
00:03:33.328 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.328 20:05:45 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 
00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:33.328 20:05:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:33.328 20:05:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.328 20:05:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.328 ************************************ 00:03:33.328 START TEST acl 00:03:33.328 ************************************ 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:33.328 * Looking for test storage... 
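The repeated scripts/common.sh block above (it reappears at the start of every sub-test) is just a version gate: the installed lcov (1.15) is compared field by field against 2, and because 1 < 2 the legacy '--rc lcov_branch_coverage=1 ...' spelling of the coverage options is exported. A stripped-down sketch of that element-wise comparison, not the scripts/common.sh implementation itself (which also splits on '-' and ':' and validates each field as numeric):

  # Sketch: succeed when version $1 is strictly older than version $2,
  # comparing dot-separated numeric fields left to right.
  version_lt() {
      local IFS=.
      local -a ver1=($1) ver2=($2)
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x: use the legacy --rc options"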
00:03:33.328 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.328 20:05:45 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc 
genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.328 --rc genhtml_branch_coverage=1 00:03:33.328 --rc genhtml_function_coverage=1 00:03:33.328 --rc genhtml_legend=1 00:03:33.328 --rc geninfo_all_blocks=1 00:03:33.328 --rc geninfo_unexecuted_blocks=1 00:03:33.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:33.328 ' 00:03:33.328 20:05:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:33.328 20:05:45 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.329 20:05:45 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:33.329 20:05:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:33.329 20:05:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:33.329 20:05:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:33.329 20:05:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:33.329 20:05:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:33.329 20:05:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.329 20:05:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.511 20:05:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:37.511 20:05:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:37.511 20:05:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.511 20:05:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:37.511 20:05:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.511 20:05:49 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:40.040 Hugepages 00:03:40.040 node hugesize free / total 00:03:40.040 20:05:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.040 20:05:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.040 20:05:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.040 20:05:52 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.040 20:05:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.040 20:05:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.299 20:05:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.299 20:05:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 00:03:40.299 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.299 20:05:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.299 20:05:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.299 20:05:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:40.299 20:05:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:40.299 20:05:53 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.299 20:05:53 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.299 20:05:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.558 ************************************ 00:03:40.558 START TEST denied 00:03:40.558 ************************************ 00:03:40.558 20:05:53 setup.sh.acl.denied -- 
common/autotest_common.sh@1129 -- # denied 00:03:40.558 20:05:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:40.558 20:05:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:40.558 20:05:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:40.558 20:05:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.558 20:05:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:44.741 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.741 20:05:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.920 00:03:48.920 real 0m8.286s 00:03:48.920 user 0m2.626s 00:03:48.920 sys 0m5.063s 00:03:48.920 20:06:01 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.920 20:06:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:48.920 ************************************ 00:03:48.920 END TEST denied 00:03:48.920 ************************************ 00:03:48.920 20:06:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:48.920 20:06:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.921 20:06:01 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.921 20:06:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.921 ************************************ 00:03:48.921 START TEST allowed 00:03:48.921 ************************************ 00:03:48.921 20:06:01 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:48.921 20:06:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:48.921 20:06:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:48.921 20:06:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:48.921 20:06:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.921 20:06:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.184 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:54.184 20:06:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:54.184 20:06:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:54.184 20:06:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:54.184 20:06:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.184 20:06:06 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.471 00:03:57.471 real 0m8.182s 00:03:57.471 user 0m2.207s 00:03:57.471 sys 0m4.545s 00:03:57.471 20:06:09 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.471 20:06:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:57.471 ************************************ 00:03:57.471 END TEST allowed 00:03:57.471 ************************************ 00:03:57.471 00:03:57.471 real 0m24.192s 00:03:57.471 user 0m7.638s 00:03:57.471 sys 0m14.820s 00:03:57.471 20:06:09 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.471 20:06:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.471 ************************************ 00:03:57.471 END TEST acl 00:03:57.471 ************************************ 00:03:57.471 20:06:09 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:57.471 20:06:09 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.471 20:06:09 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.471 20:06:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.471 ************************************ 00:03:57.471 START TEST hugepages 00:03:57.471 ************************************ 00:03:57.471 20:06:09 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:57.471 * Looking for test storage... 00:03:57.471 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:57.471 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.471 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.472 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.472 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.472 20:06:10 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:57.472 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.472 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.472 --rc genhtml_branch_coverage=1 00:03:57.472 --rc genhtml_function_coverage=1 00:03:57.472 --rc genhtml_legend=1 00:03:57.472 --rc geninfo_all_blocks=1 00:03:57.472 --rc geninfo_unexecuted_blocks=1 00:03:57.472 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:57.472 ' 00:03:57.472 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.472 --rc genhtml_branch_coverage=1 00:03:57.472 --rc genhtml_function_coverage=1 00:03:57.472 --rc genhtml_legend=1 00:03:57.472 --rc geninfo_all_blocks=1 00:03:57.472 --rc geninfo_unexecuted_blocks=1 00:03:57.472 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:57.472 ' 00:03:57.472 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.472 --rc genhtml_branch_coverage=1 00:03:57.472 --rc genhtml_function_coverage=1 00:03:57.472 --rc genhtml_legend=1 00:03:57.472 --rc geninfo_all_blocks=1 00:03:57.472 --rc geninfo_unexecuted_blocks=1 00:03:57.472 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:57.472 ' 00:03:57.472 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.472 --rc genhtml_branch_coverage=1 00:03:57.472 --rc genhtml_function_coverage=1 00:03:57.472 --rc genhtml_legend=1 00:03:57.472 --rc geninfo_all_blocks=1 00:03:57.472 --rc geninfo_unexecuted_blocks=1 00:03:57.472 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:57.472 ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:57.472 20:06:10 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 40937352 kB' 'MemAvailable: 42660840 kB' 'Buffers: 4380 kB' 'Cached: 10377712 kB' 'SwapCached: 76 kB' 'Active: 6661900 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782012 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618544 kB' 'Mapped: 188604 kB' 'Shmem: 8579192 kB' 'KReclaimable: 568236 kB' 'Slab: 1557488 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 989252 kB' 'KernelStack: 21952 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433348 kB' 'Committed_AS: 10488168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217956 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.472 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 
20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
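The long run of setup/common.sh@31/@32 lines above is get_meminfo scanning /proc/meminfo one field at a time: each line is split on ': ' into a key and a value, every key is compared against the requested name (Hugepagesize here), and the matching value is echoed back. A compact sketch of that lookup, assuming the system-wide /proc/meminfo rather than the per-node variant the helper can also read:

  # Sketch: fetch a single field from /proc/meminfo,
  # e.g. get_meminfo Hugepagesize -> 2048 (kB).
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1   # key not present
  }

  echo "default huge page size: $(get_meminfo Hugepagesize) kB"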
00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.473 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # 
default_hugepages=2048 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:57.474 20:06:10 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:57.474 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.474 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.474 20:06:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.474 ************************************ 00:03:57.474 START TEST single_node_setup 00:03:57.474 ************************************ 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:57.474 20:06:10 
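For anyone decoding the trace above: the long run of "[[ Field == Hugepagesize ]] ... continue" entries is setup/common.sh's get_meminfo helper walking /proc/meminfo one "Field: value" pair at a time until it reaches Hugepagesize (2048 kB here), and the hugepages.sh@38-44 entries are clear_hp zeroing every hugepage pool on both NUMA nodes before the test (the echo 0 is presumably redirected into each pool's nr_hugepages; xtrace does not show redirections). A minimal reconstruction of the two helpers, simplified from what the trace shows (the real get_meminfo also takes an optional node argument, snapshots the file with mapfile and strips the "Node N" prefix first), might look like:

    get_meminfo() {                              # usage: get_meminfo <Field>  -> prints the value column
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                          # e.g. 2048 for Hugepagesize (values are in kB)
            return 0
        done < /proc/meminfo
        return 1
    }

    clear_hp() {                                 # zero every hugepage pool on every NUMA node
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"      # root required
            done
        done
        export CLEAR_HUGE=yes
    }

Called as "get_meminfo Hugepagesize" it prints 2048 here, which is where the default_hugepages=2048 assignment above comes from.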
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.474 20:06:10 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:00.756 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:00.756 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.015 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.015 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.397 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup 
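With NRHUGE=1024 and HUGENODE=0 set just above and Hugepagesize already known to be 2048 kB, scripts/setup.sh is being asked to reserve 1024 x 2 MiB = 2 GiB of hugepages on node 0 only; the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines are the same script rebinding the DMA engines and the NVMe device to vfio-pci. The hugepage half of that is roughly equivalent to the sysfs write below (the exact commands setup.sh runs are not shown in this log):

    # reserve 1024 x 2048 kB pages on NUMA node 0 only (root required)
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # read it back; the kernel may report fewer if memory is too fragmented
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages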
-- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43121824 kB' 'MemAvailable: 44845312 kB' 'Buffers: 4380 kB' 'Cached: 10377852 kB' 'SwapCached: 76 kB' 'Active: 6661208 kB' 'Inactive: 4334596 kB' 'Active(anon): 5781320 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616820 kB' 'Mapped: 188712 kB' 'Shmem: 8579332 kB' 'KReclaimable: 568236 kB' 'Slab: 1556056 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987820 kB' 'KernelStack: 22016 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10488940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218084 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal 
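The meminfo snapshot printed above is the data verify_nr_hugepages works from: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB, i.e. 1024 x 2048 kB = 2 GiB reserved and still entirely free (Hugetlb: 2097152 kB is the same figure), which matches the NRHUGE=1024 request. Everything after it is the same get_meminfo scan as before, now walking every field until it reaches AnonHugePages. The same counters can be pulled directly with, for example:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo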
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.397 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.398 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.759 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:02.760 20:06:15 
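The anon=0 recorded here closes out the check opened at hugepages.sh@95/96: because the transparent-hugepage setting expanded there reads "always [madvise] never" (presumably /sys/kernel/mm/transparent_hugepage/enabled, i.e. THP is not forced off), the script reads AnonHugePages to confirm no transparent hugepages are inflating the numbers, and it comes back 0 kB. The scan that follows repeats the same field-by-field walk for HugePages_Surp. A quick manual equivalent of that gate:

    cat /sys/kernel/mm/transparent_hugepage/enabled   # e.g. "always [madvise] never"
    grep AnonHugePages /proc/meminfo                  # AnonHugePages: 0 kB here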
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43134160 kB' 'MemAvailable: 44857648 kB' 'Buffers: 4380 kB' 'Cached: 10377856 kB' 'SwapCached: 76 kB' 'Active: 6660424 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780536 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616564 kB' 'Mapped: 188712 kB' 'Shmem: 8579336 kB' 'KReclaimable: 568236 kB' 'Slab: 1556176 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987940 kB' 'KernelStack: 21952 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10488960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218148 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.760 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.761 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.762 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
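With surp=0 now set, the only counter left in this pass is HugePages_Rsvd, whose scan starts here and runs past the end of this excerpt (the snapshots above already show it as 0). Collected in one place, the three probes traced at hugepages.sh@96/98/99 amount to:

    anon=$(get_meminfo AnonHugePages)    # transparent hugepages in use, kB  -> 0
    surp=$(get_meminfo HugePages_Surp)   # surplus hugepages                 -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted in   -> 0 per the snapshots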
get=HugePages_Rsvd 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43135932 kB' 'MemAvailable: 44859420 kB' 'Buffers: 4380 kB' 'Cached: 10377872 kB' 'SwapCached: 76 kB' 'Active: 6661024 kB' 'Inactive: 4334596 kB' 'Active(anon): 5781136 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616584 kB' 'Mapped: 188712 kB' 'Shmem: 8579352 kB' 'KReclaimable: 568236 kB' 'Slab: 1556280 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988044 kB' 'KernelStack: 21904 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10487480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218148 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 
20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.763 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.764 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.764 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:02.765 nr_hugepages=1024 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:02.765 resv_hugepages=0 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:02.765 surplus_hugepages=0 00:04:02.765 20:06:15 
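The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries above are the xtrace of setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the requested key, then echoing its value (0 for HugePages_Rsvd on this run, hence resv=0). A minimal standalone sketch of the same lookup, with an assumed function name rather than the SPDK helper itself:

#!/usr/bin/env bash
# Sketch only: print the value of one /proc/meminfo field, the way the
# traced loop does (field name, value and optional unit split on ': ').
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested field, keep scanning
        echo "$val"                        # e.g. 0 for HugePages_Rsvd
        return 0
    done < /proc/meminfo
    return 1                               # field not present on this kernel
}

get_meminfo_field HugePages_Rsvd

The traced helper differs mainly in that it snapshots the file into an array with mapfile first and can read a per-node meminfo file instead of /proc/meminfo.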
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:02.765 anon_hugepages=0 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:02.765 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43136804 kB' 'MemAvailable: 44860292 kB' 'Buffers: 4380 kB' 'Cached: 10377872 kB' 'SwapCached: 76 kB' 'Active: 6661104 kB' 'Inactive: 4334596 kB' 'Active(anon): 5781216 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616608 kB' 'Mapped: 188712 kB' 'Shmem: 8579352 kB' 'KReclaimable: 568236 kB' 'Slab: 1556280 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988044 kB' 'KernelStack: 21984 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10489004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218100 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23318420 kB' 'MemUsed: 9316016 kB' 'SwapCached: 44 kB' 'Active: 4685620 kB' 'Inactive: 532564 kB' 'Active(anon): 3908192 kB' 'Inactive(anon): 56 kB' 'Active(file): 777428 kB' 'Inactive(file): 532508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4969896 kB' 'Mapped: 120348 kB' 'AnonPages: 251484 kB' 'Shmem: 3659916 kB' 'KernelStack: 10520 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394384 kB' 'Slab: 871628 kB' 'SReclaimable: 394384 kB' 'SUnreclaim: 477244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- 
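At this point the trace has moved to per-node accounting: get_meminfo HugePages_Surp 0 is called with node=0, so "[[ -e /sys/devices/system/node/node0/meminfo ]]" succeeds, mem_f becomes that node's meminfo, and the leading "Node 0 " prefix on each of its lines is stripped with "${mem[@]#Node +([0-9]) }" before parsing. A short sketch of that file selection and prefix strip (assumed function name, not the SPDK code itself):

#!/usr/bin/env bash
shopt -s extglob                       # required for the +([0-9]) pattern below

# Sketch only: read either /proc/meminfo or one NUMA node's meminfo and
# normalise away the per-node "Node N " prefix before printing the fields.
node_meminfo() {
    local node=$1 mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node lines look like "Node 0 MemTotal: ..."
    printf '%s\n' "${mem[@]}"
}

node_meminfo 0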
setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:02.770 node0=1024 expecting 1024 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.770 00:04:02.770 real 0m5.222s 00:04:02.770 user 0m1.376s 00:04:02.770 sys 0m2.433s 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.770 20:06:15 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:02.770 ************************************ 00:04:02.770 END TEST single_node_setup 00:04:02.770 ************************************ 00:04:02.770 20:06:15 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:02.770 20:06:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.770 20:06:15 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.770 20:06:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.770 ************************************ 00:04:02.770 START TEST even_2G_alloc 00:04:02.770 ************************************ 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.770 20:06:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:06.188 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.188 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:06.188 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43190780 kB' 'MemAvailable: 44914268 kB' 'Buffers: 4380 kB' 'Cached: 10378020 kB' 'SwapCached: 76 kB' 'Active: 6660380 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780492 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615796 kB' 'Mapped: 187616 kB' 'Shmem: 8579500 kB' 'KReclaimable: 568236 kB' 'Slab: 1557176 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988940 kB' 'KernelStack: 21904 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10479752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218036 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 
20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
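The loop traced above is the same field-by-field scan get_meminfo performs over every /proc/meminfo key until it reaches the one it was asked for (AnonHugePages in this pass, HugePages_Surp and HugePages_Rsvd in the passes that follow). A minimal standalone sketch of that parsing pattern, using a hypothetical helper name rather than the actual setup/common.sh function:

    # Sketch only: approximates the IFS=': ' / read -r var val _ scan shown in the
    # xtrace above; the real setup/common.sh helper also handles per-node
    # /sys/devices/system/node/node*/meminfo files, which this omits.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # stop at the requested field and print its numeric value
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0
    }

    # e.g. the surplus hugepage count that verify_nr_hugepages expects to be 0
    get_meminfo_sketch HugePages_Surp
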
00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 
20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.188 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43191684 kB' 'MemAvailable: 44915172 kB' 'Buffers: 4380 kB' 'Cached: 10378024 kB' 'SwapCached: 76 kB' 'Active: 6660560 kB' 
'Inactive: 4334596 kB' 'Active(anon): 5780672 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616004 kB' 'Mapped: 187584 kB' 'Shmem: 8579504 kB' 'KReclaimable: 568236 kB' 'Slab: 1557212 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988976 kB' 'KernelStack: 21872 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10479772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.189 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43192348 kB' 'MemAvailable: 44915836 kB' 'Buffers: 4380 kB' 'Cached: 10378040 kB' 'SwapCached: 76 kB' 'Active: 6660576 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780688 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616000 kB' 'Mapped: 187584 kB' 'Shmem: 8579520 kB' 'KReclaimable: 568236 kB' 'Slab: 1557212 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988976 kB' 'KernelStack: 21872 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10479792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 
kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.190 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.461 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:06.462 nr_hugepages=1024 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:06.462 resv_hugepages=0 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:06.462 surplus_hugepages=0 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:06.462 anon_hugepages=0 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43192576 kB' 'MemAvailable: 44916064 kB' 'Buffers: 4380 kB' 'Cached: 10378080 kB' 'SwapCached: 76 kB' 'Active: 6660244 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780356 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615588 kB' 'Mapped: 187584 kB' 'Shmem: 8579560 kB' 'KReclaimable: 568236 kB' 'Slab: 1557212 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988976 kB' 'KernelStack: 21856 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10479812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.462 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.463 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
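The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) one field at a time with IFS=': ' until it reaches the requested key, then echoing that key's value. A minimal standalone sketch of the same parsing pattern, assuming a standard Linux /proc/meminfo layout; get_meminfo_sketch is a hypothetical name for illustration, not the SPDK setup/common.sh function itself:

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern shown in the xtrace above.
# get_meminfo_sketch KEY [NODE] prints KEY's value column, reading
# /proc/meminfo or, when NODE is given, that node's meminfo file.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val
    while IFS= read -r line; do
        line=${line#Node "$node" }          # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then       # same string compare the xtrace repeats per field
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    echo 0                                  # key absent: report 0, as the surp/resv lookups do
}

# Even 2G allocation check in the same spirit as this test: with 1024 x 2 MiB pages
# requested and an even split, each NUMA node's meminfo should report 512.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    echo "node$n HugePages_Total: $(get_meminfo_sketch HugePages_Total "$n")"
done

Run on a two-node machine configured as in this log, the per-node loop would print 512 for node0 and node1, which is exactly the accounting the following HugePages_Surp reads against node0 and node1 meminfo verify.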
00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:06.464 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24396788 kB' 'MemUsed: 8237648 kB' 'SwapCached: 44 kB' 'Active: 4685700 kB' 'Inactive: 532564 kB' 'Active(anon): 3908272 kB' 'Inactive(anon): 56 kB' 'Active(file): 777428 kB' 'Inactive(file): 532508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4969932 kB' 'Mapped: 119572 kB' 'AnonPages: 251544 kB' 'Shmem: 3659952 kB' 'KernelStack: 10552 kB' 'PageTables: 4788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394384 kB' 'Slab: 871968 kB' 'SReclaimable: 394384 kB' 'SUnreclaim: 477584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 
20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.464 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 
20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.466 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 18795916 kB' 'MemUsed: 8853444 kB' 'SwapCached: 32 kB' 'Active: 1974980 kB' 'Inactive: 3802032 kB' 'Active(anon): 1872520 kB' 'Inactive(anon): 3411528 kB' 'Active(file): 102460 kB' 'Inactive(file): 390504 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5412628 kB' 'Mapped: 68012 kB' 'AnonPages: 364480 kB' 'Shmem: 4919632 kB' 'KernelStack: 11320 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173852 kB' 'Slab: 685244 kB' 'SReclaimable: 173852 kB' 'SUnreclaim: 511392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 
20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:06.467 node0=512 expecting 512 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:06.467 node1=512 expecting 512 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:06.467 00:04:06.467 real 0m3.698s 00:04:06.467 user 0m1.441s 00:04:06.467 sys 0m2.324s 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.467 20:06:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.467 ************************************ 00:04:06.468 END TEST even_2G_alloc 00:04:06.468 ************************************ 00:04:06.468 20:06:19 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:06.468 20:06:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.468 20:06:19 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.468 20:06:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.468 ************************************ 00:04:06.468 START TEST odd_alloc 00:04:06.468 ************************************ 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:06.468 20:06:19 
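The field-by-field loop traced above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node1/meminfo until it reaches HugePages_Surp and echoes its value (0 here); every "continue" line is one non-matching key being skipped. A minimal, simplified sketch of an equivalent lookup follows — lookup_meminfo is a hypothetical name for illustration, not the repository's actual helper, which (as the trace shows) uses mapfile plus an extglob strip of the "Node <n> " prefix instead of sed.

  # Sketch only: a simplified stand-in for the get_meminfo lookup traced above.
  # lookup_meminfo is a hypothetical name; the real helper lives in setup/common.sh.
  lookup_meminfo() {
      local key=$1 node=${2:-} file=/proc/meminfo
      # With a node argument, read the per-node stats from sysfs instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node files prefix every line with "Node <n> "; strip that, then
      # split on ": " exactly as the traced read loop does.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ +//' "$file")
      return 1
  }
  # e.g. lookup_meminfo HugePages_Surp 1   -> prints 0 on this node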
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.468 20:06:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:09.758 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.758 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:09.758 20:06:22 
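The odd_alloc pass traced above turns HUGEMEM=2049 (a 2098176 kB request) into nr_hugepages=1025 and spreads it over the two NUMA nodes as 513 and 512 before re-running setup.sh and the same verification loop. The arithmetic as a standalone sketch — the round-up and the choice of which node takes the odd page are inferred from the trace, not taken from the script itself:

  # Sketch only: reproduces the page counts seen in the trace above.
  size_kb=2098176                  # HUGEMEM=2049 MiB expressed in kB
  hugepage_kb=2048                 # default 2 MiB hugepage size
  nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # -> 1025
  nodes=2
  per_node=$(( nr_hugepages / nodes ))                            # -> 512
  remainder=$(( nr_hugepages % nodes ))                           # -> 1
  echo "nr_hugepages=$nr_hugepages, split $(( per_node + remainder ))/$per_node"
  # prints: nr_hugepages=1025, split 513/512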
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43206460 kB' 'MemAvailable: 44929948 kB' 'Buffers: 4380 kB' 'Cached: 10378192 kB' 'SwapCached: 76 kB' 'Active: 6660828 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780940 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616080 kB' 'Mapped: 187616 kB' 'Shmem: 8579672 kB' 'KReclaimable: 568236 kB' 'Slab: 1557256 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 989020 kB' 'KernelStack: 21920 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10480440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.758 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 
20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.759 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43208804 kB' 'MemAvailable: 44932292 kB' 'Buffers: 4380 kB' 'Cached: 10378196 kB' 'SwapCached: 76 kB' 'Active: 6660496 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780608 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615792 kB' 'Mapped: 187568 kB' 'Shmem: 8579676 kB' 'KReclaimable: 568236 kB' 'Slab: 1557260 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 989024 kB' 'KernelStack: 21904 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10480456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.760 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 
20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43208976 kB' 'MemAvailable: 44932464 kB' 'Buffers: 4380 kB' 'Cached: 10378212 kB' 'SwapCached: 76 kB' 'Active: 6660116 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780228 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615344 kB' 'Mapped: 187568 kB' 'Shmem: 8579692 kB' 'KReclaimable: 568236 kB' 'Slab: 1557260 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 989024 kB' 'KernelStack: 21888 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10480476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.761 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.030 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:10.031 nr_hugepages=1025 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:10.031 resv_hugepages=0 00:04:10.031 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:10.032 surplus_hugepages=0 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:10.032 anon_hugepages=0 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43207468 kB' 'MemAvailable: 44930956 kB' 'Buffers: 4380 kB' 'Cached: 10378228 kB' 'SwapCached: 76 kB' 'Active: 6660012 kB' 'Inactive: 4334596 kB' 'Active(anon): 5780124 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615224 kB' 'Mapped: 187568 kB' 'Shmem: 8579708 kB' 'KReclaimable: 568236 kB' 'Slab: 1557260 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 989024 kB' 'KernelStack: 21872 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 
'Committed_AS: 10492772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.032 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24398848 kB' 'MemUsed: 8235588 kB' 'SwapCached: 44 kB' 'Active: 4683472 kB' 'Inactive: 532564 kB' 'Active(anon): 3906044 kB' 'Inactive(anon): 56 kB' 'Active(file): 777428 kB' 'Inactive(file): 532508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4969948 kB' 'Mapped: 119556 kB' 'AnonPages: 249184 kB' 'Shmem: 3659968 kB' 'KernelStack: 10536 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394384 kB' 'Slab: 871792 kB' 'SReclaimable: 394384 kB' 'SUnreclaim: 477408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.033 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
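The trace above is setup/common.sh's get_meminfo walking every field of /sys/devices/system/node/node0/meminfo until it reaches HugePages_Surp. A minimal sketch of that per-node lookup, assuming the standard sysfs layout (this condenses the helper; it is not the actual setup/common.sh code):

#!/usr/bin/env bash
# Sketch: fetch one meminfo field for a NUMA node, falling back to /proc/meminfo.
# Per-node files prefix every row with "Node <id> ", which is stripped first.
get_node_meminfo() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_node_meminfo HugePages_Surp 0   # prints 0 on the configuration traced here

The real helper instead mapfile-reads the whole file and scans it with the same IFS=': ' split, which is what produces the long run of 'continue' entries in this trace.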
00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 18807892 kB' 'MemUsed: 8841468 kB' 'SwapCached: 32 kB' 'Active: 1976524 kB' 'Inactive: 3802032 kB' 'Active(anon): 1874064 kB' 'Inactive(anon): 3411528 kB' 'Active(file): 102460 kB' 'Inactive(file): 390504 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5412760 kB' 'Mapped: 68012 kB' 'AnonPages: 365948 kB' 'Shmem: 4919764 kB' 'KernelStack: 11304 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173852 kB' 'Slab: 685468 kB' 'SReclaimable: 173852 kB' 'SUnreclaim: 511616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.034 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
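A few steps further down the run echoes node0=513 expecting 513 and node1=512 expecting 512: odd_alloc requests an odd total (1025 pages on this two-node box, per the HugePages_Total values above), so the nodes cannot split it evenly and the extra page lands on node 0. As an illustration only (not the exact setup/hugepages.sh arithmetic), one way to spread such a count with the remainder going to the lowest-numbered nodes:

#!/usr/bin/env bash
# Illustration: divide an odd hugepage count across NUMA nodes, letting the
# first nodes absorb the remainder, which reproduces the 513/512 result below.
split_hugepages() {
    local total=$1 nodes=$2
    local base=$(( total / nodes )) extra=$(( total % nodes )) n
    for (( n = 0; n < nodes; n++ )); do
        echo "node$n=$(( base + (n < extra ? 1 : 0) ))"
    done
}

split_hugepages 1025 2
# node0=513
# node1=512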
00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:10.035 node0=513 expecting 513 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:10.035 node1=512 expecting 512 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:10.035 00:04:10.035 real 0m3.484s 00:04:10.035 user 0m1.344s 00:04:10.035 sys 0m2.201s 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.035 20:06:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.035 ************************************ 00:04:10.035 END TEST odd_alloc 00:04:10.035 ************************************ 00:04:10.035 20:06:22 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test 
custom_alloc custom_alloc 00:04:10.035 20:06:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.035 20:06:22 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.035 20:06:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.035 ************************************ 00:04:10.035 START TEST custom_alloc 00:04:10.035 ************************************ 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:10.035 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.036 20:06:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:13.327 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.327 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.327 
20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42206076 kB' 'MemAvailable: 43929564 kB' 'Buffers: 4380 kB' 'Cached: 10378360 kB' 'SwapCached: 76 kB' 'Active: 6662220 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782332 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617288 kB' 'Mapped: 187580 kB' 'Shmem: 8579840 kB' 'KReclaimable: 568236 kB' 'Slab: 1556712 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988476 kB' 'KernelStack: 22096 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10483392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218420 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
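Earlier in this test custom_alloc set nodes_hp[0]=512 and nodes_hp[1]=1024, joined them into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', and accumulated a 1536-page total before invoking scripts/setup.sh. A small sketch of that assembly step, with names mirroring the trace (the comma join comes from the local IFS=, set at hugepages.sh@157):

#!/usr/bin/env bash
# Sketch: build the per-node HUGENODE request string and the page total
# implied by the values above (512 pages on node 0, 1024 on node 1).
nodes_hp=([0]=512 [1]=1024)

build_hugenode() {
    local IFS=,                  # makes "${parts[*]}" join with commas
    local node total=0 parts=()
    for node in "${!nodes_hp[@]}"; do
        parts+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( total += nodes_hp[node] ))
    done
    echo "HUGENODE=${parts[*]}"   # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$total"    # 1536
}

build_hugenode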
00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
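The loop running here is verify_nr_hugepages ruling out transparent-hugepage noise: because /sys/kernel/mm/transparent_hugepage/enabled reads 'always [madvise] never' rather than having [never] selected (the hugepages.sh@95 check above), it looks up AnonHugePages and records 0. A sketch of that guard, assuming the standard procfs/sysfs paths:

#!/usr/bin/env bash
# Sketch: only read AnonHugePages when THP is not globally disabled,
# mirroring the "[never]" guard and the AnonHugePages lookup traced here.
anon=0
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp_state != *"[never]"* ]]; then
    # value is reported in kB; 0 here, so anonymous THPs cannot skew the count
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon}"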
00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42210348 kB' 'MemAvailable: 43933836 kB' 'Buffers: 4380 kB' 'Cached: 10378360 kB' 'SwapCached: 76 kB' 'Active: 6662508 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782620 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617524 kB' 'Mapped: 187520 kB' 'Shmem: 8579840 kB' 'KReclaimable: 568236 kB' 'Slab: 1556712 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988476 kB' 'KernelStack: 22144 kB' 'PageTables: 9356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10483172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218404 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
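The setup lines just traced show how get_meminfo picks its data source: with no node argument it falls back to the system-wide /proc/meminfo, and when a node is given it would read the per-node meminfo and strip the "Node N " prefix so both formats parse identically. A minimal stand-alone sketch of that selection, reconstructed from the trace (not the verbatim setup/common.sh code):

    #!/usr/bin/env bash
    # Sketch (assumed helper): choose a meminfo source the way the traced setup suggests.
    node=$1                      # optional NUMA node number, e.g. 0; empty means system-wide
    mem_f=/proc/meminfo          # default source
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ...", so strip that prefix for uniform parsing.
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"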
00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.328 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42210348 kB' 'MemAvailable: 43933836 kB' 'Buffers: 4380 kB' 'Cached: 10378360 kB' 'SwapCached: 76 kB' 'Active: 6662508 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782620 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617524 kB' 'Mapped: 187520 kB' 'Shmem: 8579840 kB' 'KReclaimable: 568236 kB' 'Slab: 1556712 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988476 kB' 'KernelStack: 22144 kB' 'PageTables: 9356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10483172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218404 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB'
[... get_meminfo then walks every field of the snapshot (MemTotal through HugePages_Rsvd) with the same [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _ trace ...]
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
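Each get_meminfo call above walks the captured meminfo lines with IFS=': ', checks each field name against the requested one, and echoes the matching value; that is why the trace shows one [[ field == ... ]] / continue pair per line of /proc/meminfo. A minimal sketch of that lookup pattern, reconstructed from the trace rather than copied from setup/common.sh:

    #!/usr/bin/env bash
    # Sketch (assumed reconstruction): look up one field from /proc/meminfo like the traced loop.
    get_meminfo() {
        local get=$1 var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue     # skip fields that do not match
            echo "$val"                          # e.g. 0 for HugePages_Surp, 1536 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                 # field not present
    }
    get_meminfo HugePages_Surp    # -> 0 on this machine, per the snapshot above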
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.330 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42211632 kB' 'MemAvailable: 43935120 kB' 'Buffers: 4380 kB' 'Cached: 10378368 kB' 'SwapCached: 76 kB' 'Active: 6662788 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782900 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617868 kB' 'Mapped: 187684 kB' 'Shmem: 8579848 kB' 'KReclaimable: 568236 kB' 'Slab: 1556712 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988476 kB' 'KernelStack: 22192 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10483436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218308 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB'
[... get_meminfo walks the snapshot again (MemTotal through HugePages_Free) with the same [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _ trace ...]
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:04:13.332 nr_hugepages=1536
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:13.332 resv_hugepages=0
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:13.332 surplus_hugepages=0
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:13.332 anon_hugepages=0
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
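The hugepages.sh@106-108 checks above verify that the values just read add up: the 1536 pages the custom_alloc test expects must equal nr_hugepages with no surplus or reserved pages outstanding. A small sketch of that consistency check using the numbers from this run (variable names other than nr_hugepages/surp/resv are illustrative):

    #!/usr/bin/env bash
    # Sketch of the accounting implied by hugepages.sh@106-108, with values from this run.
    expected=1536    # pages requested by the custom_alloc test (illustrative name)
    nr_hugepages=1536
    surp=0           # HugePages_Surp from get_meminfo
    resv=0           # HugePages_Rsvd from get_meminfo
    (( expected == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch"; exit 1; }
    (( expected == nr_hugepages ))               || { echo "unexpected surplus/reserved pages"; exit 1; }
    # Cross-check against the snapshot: 1536 pages * 2048 kB/page = 3145728 kB, matching Hugetlb.
    echo "$(( nr_hugepages * 2048 )) kB"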
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.332 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.595 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42209620 kB' 'MemAvailable: 43933108 kB' 'Buffers: 4380 kB' 'Cached: 10378396 kB' 'SwapCached: 76 kB' 'Active: 6662036 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782148 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617020 kB' 'Mapped: 187684 kB' 'Shmem: 8579876 kB' 'KReclaimable: 568236 kB' 'Slab: 1556772 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 988536 kB' 'KernelStack: 22208 kB' 'PageTables: 9536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10483592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218340 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB'
[... get_meminfo again checks each snapshot field against HugePages_Total with the same [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _ trace (MemTotal through VmallocChunk so far) ...]
00:04:13.596 20:06:26
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.596 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.597 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24433540 kB' 'MemUsed: 8200896 kB' 'SwapCached: 44 
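The get_meminfo calls traced here all follow the same pattern: pick the meminfo source (the per-node copy under /sys/devices/system/node/nodeN/meminfo when a node id is given, otherwise /proc/meminfo), strip the "Node N " prefix that the per-node files carry, then read "field: value" pairs until the requested field is found and echo its value. A minimal stand-alone sketch of that behaviour, paraphrased from this trace rather than copied from the SPDK setup/common.sh source, would be:

    #!/usr/bin/env bash
    # Sketch of a get_meminfo helper: print the value of <field> from
    # /proc/meminfo, or from a per-NUMA-node meminfo copy when a node id
    # is supplied. Paraphrased from the xtrace above, not the SPDK source.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # Per-node statistics live under /sys and carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total       # -> 1536 in this run
    get_meminfo HugePages_Surp 0      # -> surplus hugepages on node 0 (0 here)

With a helper of this shape, the printf entry that follows is simply the node 0 meminfo snapshot that was just read in, and the scan below it is the field-by-field search for HugePages_Surp.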
00:04:13.597 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24433540 kB' 'MemUsed: 8200896 kB' 'SwapCached: 44 kB' 'Active: 4684912 kB' 'Inactive: 532564 kB' 'Active(anon): 3907484 kB' 'Inactive(anon): 56 kB' 'Active(file): 777428 kB' 'Inactive(file): 532508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4970044 kB' 'Mapped: 119560 kB' 'AnonPages: 250512 kB' 'Shmem: 3660064 kB' 'KernelStack: 10584 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394384 kB' 'Slab: 871524 kB' 'SReclaimable: 394384 kB' 'SUnreclaim: 477140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 read loop scans the node 0 fields above (MemTotal through HugePages_Free) against HugePages_Surp; none matches, each iteration takes the continue branch]
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
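Around these get_meminfo calls, hugepages.sh is doing a per-NUMA-node accounting pass for the custom_alloc check: nodes_sys[] records what the test asked each node for (512 pages on node 0, 1024 on node 1, 1536 in total), and for every node the reserved and surplus pages reported by the kernel are folded into nodes_test[] before the per-node totals are compared against that expectation. A compact sketch of the loop, assuming a get_meminfo helper like the one sketched above (this paraphrases the traced logic, it is not the hugepages.sh source):

    # Per-node expectation set up by the custom_alloc test in this run.
    nodes_sys=([0]=512 [1]=1024)
    nodes_test=([0]=512 [1]=1024)
    resv=0   # reserved hugepages; 0 in this trace

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus, 0 here
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # The final check then compares the joined values, e.g.:
    # [[ 512,1024 == "512,1024" ]] && echo "custom_alloc split verified"

The "node0=512 expecting 512" and "node1=1024 expecting 1024" lines further down are exactly this loop's output; the node 1 meminfo snapshot it reads from comes next in the trace.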
00:04:13.598 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 17780060 kB' 'MemUsed: 9869300 kB' 'SwapCached: 32 kB' 'Active: 1977424 kB' 'Inactive: 3802032 kB' 'Active(anon): 1874964 kB' 'Inactive(anon): 3411528 kB' 'Active(file): 102460 kB' 'Inactive(file): 390504 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5412820 kB' 'Mapped: 68012 kB' 'AnonPages: 366748 kB' 'Shmem: 4919824 kB' 'KernelStack: 11336 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173852 kB' 'Slab: 685248 kB' 'SReclaimable: 173852 kB' 'SUnreclaim: 511396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 read loop scans the node 1 fields above (MemTotal through HugePages_Free) against HugePages_Surp; none matches, each iteration takes the continue branch]
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:04:13.599 node0=512 expecting 512
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:04:13.599 node1=1024 expecting 1024
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:13.599
00:04:13.599 real 0m3.477s
00:04:13.599 user 0m1.305s
00:04:13.599 sys 0m2.220s
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:13.599 20:06:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:13.599 ************************************
00:04:13.599 END TEST custom_alloc
00:04:13.599 ************************************
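The no_shrink_alloc test that starts next first turns the requested size into a hugepage count: get_test_nr_hugepages 2097152 0 yields the nr_hugepages=1024 seen in the trace, which is consistent with a 2097152 kB (2 GiB) request divided by the default 2048 kB hugepage size, and because the node list is ('0') the whole pool is pinned to NUMA node 0 via the NRHUGE and HUGENODE variables that scripts/setup.sh consumes. A rough sketch of that sizing step (paraphrased from the trace; the kB unit is an assumption, noted in the comments):

    # Sizing step behind "get_test_nr_hugepages 2097152 0", as suggested by
    # the trace. Assumption: size and Hugepagesize are both expressed in kB.
    size=2097152                       # requested pool size (2 GiB worth)
    default_hugepages=2048             # Hugepagesize from /proc/meminfo, kB

    (( size >= default_hugepages )) || { echo "size too small" >&2; exit 1; }
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

    node_ids=('0')                     # only NUMA node 0 was requested
    echo "reserving $nr_hugepages hugepages on node(s) ${node_ids[*]}"

    # The trace then hands the result to scripts/setup.sh:
    #   NRHUGE=1024 HUGENODE=0 ./scripts/setup.sh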
00:04:13.599 20:06:26 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:13.599 20:06:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:13.599 20:06:26 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:13.599 20:06:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:13.599 ************************************
00:04:13.599 START TEST no_shrink_alloc
00:04:13.599 ************************************
00:04:13.599 20:06:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.600 20:06:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:16.891 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:16.891 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43261044 kB' 'MemAvailable: 44984532 kB' 'Buffers: 4380 kB' 'Cached: 10378544 kB' 'SwapCached: 76 kB' 'Active: 6662720 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782832 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617576 kB' 'Mapped: 187620 kB' 'Shmem: 8580024 kB' 'KReclaimable: 568236 kB' 'Slab: 1556216 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987980 kB' 'KernelStack: 21936 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB'
'CommitLimit: 37481924 kB' 'Committed_AS: 10481964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218164 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.891 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 
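The long runs of repeated xtrace lines above come from the get_meminfo helper in setup/common.sh scanning the memory-info file field by field until it reaches the requested key (here AnonHugePages, which is 0, so hugepages.sh records anon=0). The following is a minimal sketch, reconstructed only from the trace shown here, of what that helper appears to do; it is an approximation for readability, not the exact SPDK source.

    #!/usr/bin/env bash
    # Approximate reconstruction of get_meminfo from setup/common.sh, inferred
    # from the xtrace above; the real helper may differ in detail.
    shopt -s extglob

    get_meminfo() {
        local get=$1   # meminfo field to look up, e.g. AnonHugePages
        local node=$2  # optional NUMA node; empty means the global /proc/meminfo
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # If a per-node meminfo file exists, read that instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Key: value [kB]" lines until the requested key is found,
        # then print its value (this is the echo/return 0 seen in the trace).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example: get_meminfo HugePages_Total would print 1024 for the snapshot above.

The same scan repeats below for HugePages_Surp, HugePages_Rsvd, and HugePages_Total, which is why the per-key comparison lines recur for each lookup.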
00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43262572 kB' 'MemAvailable: 44986060 kB' 'Buffers: 4380 kB' 'Cached: 10378560 kB' 'SwapCached: 76 kB' 'Active: 6662604 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782716 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617464 kB' 'Mapped: 187592 kB' 'Shmem: 8580040 kB' 'KReclaimable: 568236 kB' 'Slab: 1556108 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987872 kB' 'KernelStack: 21888 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10481980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.892 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43263056 kB' 'MemAvailable: 44986544 kB' 'Buffers: 4380 kB' 'Cached: 10378568 kB' 'SwapCached: 76 kB' 'Active: 6662236 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782348 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617092 kB' 'Mapped: 187592 kB' 'Shmem: 8580048 kB' 'KReclaimable: 568236 kB' 'Slab: 1556124 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987888 kB' 'KernelStack: 21888 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10482004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218116 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.893 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:16.894 nr_hugepages=1024 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:16.894 resv_hugepages=0 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:16.894 surplus_hugepages=0 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:16.894 anon_hugepages=0 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.894 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43263376 kB' 'MemAvailable: 44986864 kB' 'Buffers: 4380 kB' 'Cached: 10378584 kB' 'SwapCached: 76 kB' 'Active: 6661860 kB' 'Inactive: 4334596 kB' 'Active(anon): 5781972 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616660 kB' 'Mapped: 187592 kB' 'Shmem: 8580064 kB' 'KReclaimable: 568236 kB' 'Slab: 1556124 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987888 kB' 'KernelStack: 21872 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10482024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218116 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.155 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 
20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.156 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.157 20:06:29 
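The long runs of "IFS=': '" / "read -r var val _" / "continue" above are the xtrace of the get_meminfo helper in setup/common.sh scanning a meminfo snapshot one key at a time until it reaches the requested field (HugePages_Rsvd, then HugePages_Total). The sketch below is a rough reconstruction of that loop from the traced statements only, not the SPDK source; anything the trace does not show (exact quoting, error handling) is an assumption. The same scan repeats further down for the per-node HugePages_Surp and AnonHugePages lookups.

#!/usr/bin/env bash
# Rough reconstruction of setup/common.sh:get_meminfo from the traced
# statements (common.sh@16-@33) in this log; details not visible in the
# trace are assumptions, not SPDK source.
shopt -s extglob

get_meminfo() {
    local get=$1        # @17  e.g. HugePages_Total
    local node=${2:-}   # @18  optional NUMA node number
    local var val _     # @19
    local mem_f mem     # @20

    mem_f=/proc/meminfo                                          # @22
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo         # @23-@24
    fi

    mapfile -t mem < "$mem_f"           # @28
    mem=("${mem[@]#Node +([0-9]) }")    # @29  strip "Node <n> " prefixes

    # @31-@33: split each "Key: value [kB]" line; print the value and stop
    # once the wanted key is found, otherwise keep scanning.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")     # @16
    return 1
}

get_meminfo HugePages_Total     # -> 1024 on this host
get_meminfo HugePages_Surp 0    # -> 0 for node 0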
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23386132 kB' 'MemUsed: 9248304 kB' 'SwapCached: 44 kB' 'Active: 4684124 kB' 'Inactive: 532564 kB' 'Active(anon): 3906696 kB' 'Inactive(anon): 56 kB' 'Active(file): 777428 kB' 'Inactive(file): 532508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4970200 kB' 'Mapped: 119580 kB' 'AnonPages: 249660 kB' 'Shmem: 3660220 kB' 'KernelStack: 10536 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394384 kB' 'Slab: 871132 kB' 'SReclaimable: 394384 kB' 'SUnreclaim: 476748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.157 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 
20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:17.158 20:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:17.158 node0=1024 expecting 1024 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.158 20:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:20.466 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.466 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:20.466 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- 
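The hugepages.sh@192 step above re-runs the SPDK setup script with the environment shown in the trace. A standalone repro of that step, assuming a root shell at the top of an SPDK checkout (the Jenkins workspace path in the log is CI-specific), is:

# CLEAR_HUGE=no keeps whatever is already allocated, so asking for 512 pages
# on a node that already holds 1024 yields the INFO line seen in the log:
# "INFO: Requested 512 hugepages but 1024 already allocated on node0".
CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 ./scripts/setup.sh

The verify_nr_hugepages pass that follows (traced from hugepages.sh@194 onwards) then repeats the same get_meminfo scans to confirm the existing allocation was not shrunk.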
setup/common.sh@20 -- # local mem_f mem 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.466 20:06:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.466 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.466 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.466 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.466 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43227376 kB' 'MemAvailable: 44950864 kB' 'Buffers: 4380 kB' 'Cached: 10378684 kB' 'SwapCached: 76 kB' 'Active: 6665136 kB' 'Inactive: 4334596 kB' 'Active(anon): 5785248 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619984 kB' 'Mapped: 187648 kB' 'Shmem: 8580164 kB' 'KReclaimable: 568236 kB' 'Slab: 1556096 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987860 kB' 'KernelStack: 21936 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10496992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.467 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.467 
20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: common.sh@31 reads each remaining /proc/meminfo field (Cached through HardwareCorrupted) and common.sh@32 fails the AnonHugePages match, so the loop continues ...]
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.468 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43230248 kB' 'MemAvailable: 44953736 kB' 'Buffers: 4380 kB' 'Cached: 10378688 kB' 'SwapCached: 76 kB' 'Active: 6662704 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782816 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617416 kB' 'Mapped: 187596 kB' 'Shmem: 8580168 kB' 'KReclaimable: 568236 kB' 'Slab: 1556128 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987892 kB' 'KernelStack: 21840 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10482148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB'
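For readability, here is a minimal sketch of the get_meminfo helper whose xtrace appears above, reconstructed only from the traced setup/common.sh line numbers (@17-@33); the SPDK source itself is not part of this log, so the argument handling and the per-node branch are assumptions:

    # sketch only -- reconstructed from the xtrace above, not copied from setup/common.sh
    shopt -s extglob                          # the +([0-9]) pattern below needs extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _    # field name to look up, optional NUMA node
        local mem_f mem
        mem_f=/proc/meminfo
        # use the per-NUMA-node counters when a node is requested and its meminfo exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node <n> " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                       # value only, e.g. 0 or 1024 (unit discarded by read)
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this run it is called as get_meminfo AnonHugePages, get_meminfo HugePages_Surp, get_meminfo HugePages_Rsvd and get_meminfo HugePages_Total, each printing the bare number for the requested field.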
[... xtrace elided: the same per-key scan runs over the snapshot above; every field from MemTotal through HugePages_Free fails the HugePages_Surp match at common.sh@32 and the loop continues ...]
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.470 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43230624 kB' 'MemAvailable: 44954112 kB' 'Buffers: 4380 kB' 'Cached: 10378704 kB' 'SwapCached: 76 kB' 'Active: 6662764 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782876 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616948 kB' 'Mapped: 187596 kB' 'Shmem: 8580184 kB' 'KReclaimable: 568236 kB' 'Slab: 1556128 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987892 kB' 'KernelStack: 21824 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10482172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB'
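To make the surrounding trace easier to follow, this is the bookkeeping the no_shrink_alloc test is performing here, written out compactly; the variable names match the assignments visible in the trace (hugepages.sh@96/@98/@99) and the values are the ones this run produced:

    # compact restatement of the lookups traced around this point
    anon=$(get_meminfo AnonHugePages)    # 0  (hugepages.sh@96, from the first snapshot)
    surp=$(get_meminfo HugePages_Surp)   # 0  (hugepages.sh@98, from the snapshot above)
    resv=$(get_meminfo HugePages_Rsvd)   # 0  (hugepages.sh@99, resolved just below)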
[... xtrace elided: the per-key scan runs over the second snapshot; every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match at common.sh@32 and the loop continues ...]
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:20.472 nr_hugepages=1024
20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:20.472 resv_hugepages=0
20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:20.472 surplus_hugepages=0
20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:20.472 anon_hugepages=0
20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43230624 kB' 'MemAvailable: 44954112 kB' 'Buffers: 4380 kB' 'Cached: 10378728 kB' 'SwapCached: 76 kB' 'Active: 6662524 kB' 'Inactive: 4334596 kB' 'Active(anon): 5782636 kB' 'Inactive(anon): 3411584 kB' 'Active(file): 879888 kB' 'Inactive(file): 923012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8340476 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617192 kB' 'Mapped: 187596 kB' 'Shmem: 8580208 kB' 'KReclaimable: 568236 kB' 'Slab: 1556128 kB' 'SReclaimable: 568236 kB' 'SUnreclaim: 987892 kB' 'KernelStack: 21824 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10482328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB'
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.472 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 
20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.473 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.474 
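The long runs of continue/read entries condensed above are setup/common.sh's get_meminfo scanning a meminfo file one "field: value" line at a time until the requested field matches. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (paths and names follow what the log shows):

# Condensed sketch of the get_meminfo lookup traced above: read /proc/meminfo,
# or a NUMA node's meminfo when a node index is given, and print the value of
# the requested field.
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node lookups (e.g. get_meminfo HugePages_Surp 0) read that node's file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    shopt -s extglob
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node N "

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# The hugepages assertions above amount to: the 1024 requested pages must equal
# HugePages_Total with nothing reserved or surplus.
total=$(get_meminfo HugePages_Total)   # 1024 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0
surp=$(get_meminfo HugePages_Surp 0)   # 0 on node0
(( total == 1024 + surp + resv )) && echo "nr_hugepages=1024 confirmed"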
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:20.474 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23372900 kB' 'MemUsed: 9261536 kB' 'SwapCached: 44 kB' 'Active: 4684212 kB' 'Inactive: 532564 kB' 'Active(anon): 3906784 kB' 'Inactive(anon): 56 kB' 'Active(file): 777428 kB' 'Inactive(file): 532508 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4970300 kB' 'Mapped: 119584 kB' 'AnonPages: 249620 kB' 'Shmem: 3660320 kB' 'KernelStack: 10536 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394384 kB' 'Slab: 870876 kB' 'SReclaimable: 394384 kB' 'SUnreclaim: 476492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[log condensed: node0's fields are scanned in order until HugePages_Surp matches]
00:04:20.475 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.475 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:20.475 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:20.476 node0=1024 expecting 1024
00:04:20.476 20:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:20.476
00:04:20.476 real 0m6.724s
00:04:20.476 user 0m2.432s
00:04:20.476 sys 0m4.355s
00:04:20.476 ************************************
00:04:20.476 END TEST no_shrink_alloc
00:04:20.476 ************************************
00:04:20.476 20:06:33 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp
00:04:20.476 20:06:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:20.476 20:06:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
[log condensed: nr_hugepages is reset to 0 for each hugepage size directory on both NUMA nodes]
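The clear_hp entries above walk every hugepage size directory under each NUMA node and write 0 back, then mark the pool as cleared for scripts/setup.sh. A minimal sketch of that cleanup; the real hugepages.sh iterates the node list it recorded earlier, while this version globs sysfs directly, and the nr_hugepages target file is an assumption (the trace only shows the bare echo 0):

# Reset hugepage reservations on every NUMA node, as the clear_hp trace does.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # e.g. .../hugepages-2048kB/nr_hugepages and .../hugepages-1048576kB/nr_hugepages
            echo 0 >"$hp/nr_hugepages"   # needs root, as in this CI run
        done
    done
    export CLEAR_HUGE=yes   # tells scripts/setup.sh the pool was already cleared
}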
00:04:20.476 20:06:33 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:04:20.476 20:06:33 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:04:20.476
00:04:20.476 real 0m23.262s
00:04:20.476 user 0m8.189s
00:04:20.476 sys 0m13.946s
00:04:20.476 ************************************
00:04:20.476 END TEST hugepages
00:04:20.476 ************************************
00:04:20.476 20:06:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:20.476 ************************************
00:04:20.476 START TEST driver
00:04:20.476 ************************************
00:04:20.476 20:06:33 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:20.476 * Looking for test storage...
00:04:20.476 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
[log condensed: scripts/common.sh checks the installed lcov version (lcov --version | awk '{print $NF}', then lt 1.15 2 via cmp_versions) and exports LCOV_OPTS/LCOV with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1, the genhtml/geninfo flags, and --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh]
00:04:20.735 20:06:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:20.735 20:06:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:20.735 20:06:33 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
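The lcov check condensed above is a small field-wise version comparison: both version strings are split on '.', '-' and ':' and compared component by component. A rough equivalent of that logic, not the exact scripts/common.sh implementation:

# Returns 0 (true) when version $1 is strictly lower than version $2,
# in the style of the lt/cmp_versions trace above.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$2"

    local v
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

# The decision traced above: lcov 1.15 is older than 2, so the coverage options
# with the llvm-gcov.sh wrapper get exported for this run.
if lt 1.15 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi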
00:04:24.925 20:06:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:24.925 ************************************
00:04:24.926 START TEST guess_driver
00:04:24.926 ************************************
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 ))
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
[log condensed: modprobe lists the insmod chain irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core and vfio-pci (.ko.xz under /lib/modules/6.8.9-200.fc39.x86_64), which matches *.ko, so vfio_pci is loadable]
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:24.926 Looking for driver=vfio-pci
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:24.926 20:06:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
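The block of driver.sh@57/@58/@61 entries condensed just below is guess_driver walking the output of scripts/setup.sh config line by line: each device line carries a "->" marker followed by the driver it was bound to, and the test fails if any device reports something other than the vfio-pci driver chosen above. A condensed sketch of that loop; the exact column layout of the config output is an assumption, only the fields being read are visible in the trace:

# Verify every device reported by setup.sh config is bound to the chosen driver.
driver=vfio-pci
fail=0

while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == "->" ]] || continue   # only device lines carry the "->" marker
    if [[ $setup_driver != "$driver" ]]; then
        echo "device bound to ${setup_driver:-no driver}, expected $driver" >&2
        fail=1
    fi
done < <(/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config)

(( fail == 0 )) && echo "all devices are using $driver"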
[log condensed: from 00:04:28.213 onward, driver.sh@57-61 repeat the marker/driver check for every device line printed by scripts/setup.sh config; each one shows the "->" marker followed by vfio-pci, so fail stays 0]
00:04:30.123 20:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:30.123 20:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:30.123 20:06:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:35.393
00:04:35.393 real 0m9.761s
00:04:35.393 user 0m2.571s
00:04:35.393 sys 0m4.944s
00:04:35.393 ************************************
00:04:35.393 END TEST guess_driver
00:04:35.393 ************************************
00:04:35.393
00:04:35.393 real 0m14.354s
00:04:35.393 user 0m3.797s
00:04:35.393 sys 0m7.456s
00:04:35.393 ************************************
00:04:35.393 END TEST driver
00:04:35.393 ************************************
00:04:35.393 20:06:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
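The START/END banners and the real/user/sys lines threaded through this log come from the run_test helper in common/autotest_common.sh, which times each sub-test and frames it with banners so suites nest (no_shrink_alloc inside hugepages, guess_driver inside driver). A simplified sketch reconstructed only from the behavior visible in the log; the real helper does more bookkeeping:

# Simplified run_test in the spirit of common/autotest_common.sh: print banners,
# time the test command, and propagate its exit status.
run_test() {
    local name=$1
    shift

    echo "************************************"
    echo "START TEST $name"
    echo "************************************"

    local rc=0
    time "$@" || rc=$?

    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# The two suite invocations seen above and below:
#   run_test driver  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
#   run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh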
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.393 20:06:47 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.393 20:06:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.393 ************************************ 00:04:35.393 START TEST devices 00:04:35.393 ************************************ 00:04:35.393 20:06:47 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:35.393 * Looking for test storage... 00:04:35.393 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:35.393 20:06:47 setup.sh.devices -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.393 20:06:47 setup.sh.devices -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.393 20:06:47 setup.sh.devices -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.393 20:06:47 setup.sh.devices -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.393 20:06:47 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:35.393 20:06:47 setup.sh.devices -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.393 20:06:47 setup.sh.devices -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.393 --rc genhtml_branch_coverage=1 00:04:35.393 --rc genhtml_function_coverage=1 00:04:35.393 --rc genhtml_legend=1 00:04:35.393 --rc geninfo_all_blocks=1 00:04:35.393 --rc geninfo_unexecuted_blocks=1 00:04:35.393 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:35.394 ' 00:04:35.394 20:06:47 setup.sh.devices -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.394 --rc genhtml_branch_coverage=1 00:04:35.394 --rc genhtml_function_coverage=1 00:04:35.394 --rc genhtml_legend=1 00:04:35.394 --rc geninfo_all_blocks=1 00:04:35.394 --rc geninfo_unexecuted_blocks=1 00:04:35.394 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:35.394 ' 00:04:35.394 20:06:47 setup.sh.devices -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.394 --rc genhtml_branch_coverage=1 00:04:35.394 --rc genhtml_function_coverage=1 00:04:35.394 --rc genhtml_legend=1 00:04:35.394 --rc geninfo_all_blocks=1 00:04:35.394 --rc geninfo_unexecuted_blocks=1 00:04:35.394 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:35.394 ' 00:04:35.394 20:06:47 setup.sh.devices -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.394 --rc genhtml_branch_coverage=1 00:04:35.394 --rc genhtml_function_coverage=1 00:04:35.394 --rc genhtml_legend=1 00:04:35.394 --rc geninfo_all_blocks=1 00:04:35.394 --rc geninfo_unexecuted_blocks=1 00:04:35.394 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:35.394 ' 00:04:35.394 20:06:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:35.394 20:06:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:35.394 20:06:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.394 20:06:47 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:38.680 20:06:51 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:38.680 No valid GPT data, bailing 00:04:38.680 20:06:51 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:38.680 20:06:51 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:38.680 20:06:51 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:38.680 20:06:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.680 20:06:51 
setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.680 20:06:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.680 ************************************ 00:04:38.680 START TEST nvme_mount 00:04:38.680 ************************************ 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.680 20:06:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:39.618 Creating new GPT entries in memory. 00:04:39.618 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.618 other utilities. 00:04:39.618 20:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.618 20:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.618 20:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.618 20:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.618 20:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.995 Creating new GPT entries in memory. 00:04:40.995 The operation has completed successfully. 
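The trace above shows the nvme_mount helper wiping and re-partitioning /dev/nvme0n1 before the mount test runs. A minimal sketch of that same sequence, reconstructed from the setup/common.sh calls recorded in the log (the mount-point path is taken verbatim from the log; run as root and only against a disposable test disk):

    # wipe any existing partition table, then create one 1 GiB partition (sectors 2048-2099199)
    sgdisk /dev/nvme0n1 --zap-all
    sgdisk /dev/nvme0n1 --new=1:2048:2099199
    # format the new partition and mount it where the test expects it
    mkfs.ext4 -qF /dev/nvme0n1p1
    mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
    mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount

The later cleanup_nvme steps in the log undo this in reverse order: umount the mount point, then wipefs --all on the partition and on the whole disk.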
00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1576773 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.995 20:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.528 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:43.529 20:06:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.788 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.788 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.047 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:44.047 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:44.047 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:44.047 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.047 20:06:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:47.334 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.335 20:06:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # 
local pci status 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.335 20:07:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.626 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.626 00:04:50.626 real 0m11.957s 00:04:50.626 user 0m3.307s 00:04:50.626 sys 0m6.473s 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.626 20:07:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:50.626 ************************************ 00:04:50.626 END TEST nvme_mount 00:04:50.626 ************************************ 00:04:50.626 20:07:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:50.627 20:07:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.627 20:07:03 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.627 20:07:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.627 ************************************ 00:04:50.627 START TEST dm_mount 00:04:50.627 ************************************ 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # 
local disk=nvme0n1 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.627 20:07:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:52.005 Creating new GPT entries in memory. 00:04:52.005 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:52.005 other utilities. 00:04:52.005 20:07:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:52.005 20:07:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.005 20:07:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.005 20:07:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.005 20:07:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:52.943 Creating new GPT entries in memory. 00:04:52.943 The operation has completed successfully. 00:04:52.943 20:07:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:52.943 20:07:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.943 20:07:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.943 20:07:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.943 20:07:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:53.880 The operation has completed successfully. 
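The dm_mount test that follows creates two 1 GiB partitions (the sgdisk --new=1:2048:2099199 and --new=2:2099200:4196351 calls above), builds a device-mapper device named nvme_dm_test on top of them, then formats and mounts it. The dmsetup table itself is not captured in this excerpt, so the linear mapping in the sketch below is an illustrative assumption rather than the exact table the script uses; the device name and mount-point path are taken from the log:

    # assumed table: concatenate the two 1 GiB partitions (2097152 sectors each) into one linear device
    printf '%s\n' \
        "0 2097152 linear /dev/nvme0n1p1 0" \
        "2097152 2097152 linear /dev/nvme0n1p2 0" | dmsetup create nvme_dm_test
    # format and mount the resulting /dev/mapper/nvme_dm_test for the test
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount

Cleanup later in the log mirrors this: umount the dm_mount directory, dmsetup remove --force nvme_dm_test, then wipefs the two partitions.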
00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1581179 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.880 20:07:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:57.264 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.265 20:07:09 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.265 20:07:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:00.551 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:00.551 00:05:00.551 real 0m9.919s 00:05:00.551 user 0m2.313s 00:05:00.551 sys 0m4.672s 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.551 20:07:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:00.551 ************************************ 00:05:00.551 END TEST dm_mount 00:05:00.551 ************************************ 00:05:00.551 20:07:13 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:00.552 20:07:13 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.552 20:07:13 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.552 20:07:13 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.552 20:07:13 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.552 20:07:13 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.552 20:07:13 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.810 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:00.810 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:00.810 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:00.810 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:00.810 20:07:13 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:05:00.810 20:07:13 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:00.810 20:07:13 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.810 20:07:13 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.810 20:07:13 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.810 20:07:13 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.810 20:07:13 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.068 00:05:01.068 real 0m26.053s 00:05:01.068 user 0m7.016s 00:05:01.068 sys 0m13.788s 00:05:01.068 20:07:13 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.068 20:07:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.068 ************************************ 00:05:01.068 END TEST devices 00:05:01.068 ************************************ 00:05:01.068 00:05:01.068 real 1m28.353s 00:05:01.068 user 0m26.855s 00:05:01.068 sys 0m50.326s 00:05:01.068 20:07:13 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.068 20:07:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.068 ************************************ 00:05:01.068 END TEST setup.sh 00:05:01.068 ************************************ 00:05:01.068 20:07:13 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:04.351 Hugepages 00:05:04.351 node hugesize free / total 00:05:04.351 node0 1048576kB 0 / 0 00:05:04.351 node0 2048kB 1024 / 1024 00:05:04.351 node1 1048576kB 0 / 0 00:05:04.351 node1 2048kB 1024 / 1024 00:05:04.351 00:05:04.351 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.351 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:04.351 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:04.351 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:04.351 20:07:17 -- spdk/autotest.sh@117 -- # uname -s 00:05:04.351 20:07:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:04.351 20:07:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:04.351 20:07:17 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:07.625 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:05:07.625 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:07.625 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:09.000 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:09.000 20:07:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:09.935 20:07:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:09.935 20:07:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:09.935 20:07:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.935 20:07:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:09.935 20:07:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:09.935 20:07:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:09.935 20:07:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.935 20:07:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.935 20:07:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:10.194 20:07:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:10.194 20:07:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:10.194 20:07:22 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.475 Waiting for block devices as requested 00:05:13.475 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:13.475 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:13.733 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:13.733 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:13.733 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:13.992 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:13.992 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:13.992 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:14.251 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:14.251 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:14.510 20:07:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:14.510 20:07:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:14.510 20:07:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:14.510 20:07:27 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:14.510 20:07:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:14.510 20:07:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:14.510 20:07:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:14.510 20:07:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:14.510 20:07:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:14.510 20:07:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:14.510 20:07:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:14.510 20:07:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:14.510 20:07:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:14.510 20:07:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:14.510 20:07:27 -- common/autotest_common.sh@1543 -- # continue 00:05:14.510 20:07:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:14.510 20:07:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.510 20:07:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.510 20:07:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:14.510 20:07:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.510 20:07:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.510 20:07:27 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:17.794 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.794 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:18.052 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:18.052 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:18.052 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:18.052 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:18.052 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:18.052 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:18.052 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:19.427 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:19.685 20:07:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:19.685 20:07:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.685 20:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.685 20:07:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:19.685 20:07:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:19.685 20:07:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:19.685 20:07:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:19.685 20:07:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:19.685 20:07:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:19.685 20:07:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:19.685 20:07:32 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:19.685 20:07:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:19.685 20:07:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:19.685 20:07:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.685 20:07:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:19.685 20:07:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:19.685 20:07:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:19.685 20:07:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:19.685 20:07:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:19.685 20:07:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:19.685 20:07:32 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:19.685 20:07:32 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:19.685 20:07:32 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:19.685 20:07:32 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:19.685 20:07:32 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:19.685 20:07:32 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:19.685 20:07:32 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1590732 00:05:19.685 20:07:32 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.685 20:07:32 -- common/autotest_common.sh@1585 -- # waitforlisten 1590732 00:05:19.685 20:07:32 -- common/autotest_common.sh@835 -- # '[' -z 1590732 ']' 00:05:19.685 20:07:32 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.685 20:07:32 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.685 20:07:32 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.685 20:07:32 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.944 20:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.944 [2024-11-26 20:07:32.637484] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:05:19.944 [2024-11-26 20:07:32.637553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590732 ] 00:05:19.944 [2024-11-26 20:07:32.709054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.944 [2024-11-26 20:07:32.754687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.202 20:07:32 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.202 20:07:32 -- common/autotest_common.sh@868 -- # return 0 00:05:20.202 20:07:32 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:20.202 20:07:32 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:20.202 20:07:32 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:23.485 nvme0n1 00:05:23.485 20:07:35 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:23.485 [2024-11-26 20:07:36.167405] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:23.485 request: 00:05:23.485 { 00:05:23.485 "nvme_ctrlr_name": "nvme0", 00:05:23.485 "password": "test", 00:05:23.485 "method": "bdev_nvme_opal_revert", 00:05:23.485 "req_id": 1 00:05:23.485 } 00:05:23.485 Got JSON-RPC error response 00:05:23.485 response: 00:05:23.485 { 00:05:23.485 "code": -32602, 00:05:23.485 "message": "Invalid parameters" 00:05:23.485 } 00:05:23.485 20:07:36 -- common/autotest_common.sh@1591 -- # true 00:05:23.485 20:07:36 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:23.485 20:07:36 -- common/autotest_common.sh@1595 -- # killprocess 1590732 00:05:23.485 20:07:36 -- common/autotest_common.sh@954 -- # '[' -z 1590732 ']' 00:05:23.485 20:07:36 -- common/autotest_common.sh@958 -- # kill -0 1590732 00:05:23.485 20:07:36 -- common/autotest_common.sh@959 -- # uname 00:05:23.485 20:07:36 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.485 20:07:36 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1590732 00:05:23.485 20:07:36 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.485 20:07:36 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.485 20:07:36 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1590732' 00:05:23.485 killing process with pid 1590732 00:05:23.485 20:07:36 -- common/autotest_common.sh@973 -- # kill 1590732 00:05:23.485 20:07:36 -- common/autotest_common.sh@978 -- # wait 1590732 00:05:26.014 20:07:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:26.014 20:07:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:26.014 20:07:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.014 20:07:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.014 20:07:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:26.014 20:07:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.014 20:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:26.014 20:07:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:26.014 20:07:38 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:26.014 20:07:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.014 20:07:38 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:05:26.014 20:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:26.014 ************************************ 00:05:26.014 START TEST env 00:05:26.014 ************************************ 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:26.014 * Looking for test storage... 00:05:26.014 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.014 20:07:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.014 20:07:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.014 20:07:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.014 20:07:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.014 20:07:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.014 20:07:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.014 20:07:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.014 20:07:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.014 20:07:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.014 20:07:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.014 20:07:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.014 20:07:38 env -- scripts/common.sh@344 -- # case "$op" in 00:05:26.014 20:07:38 env -- scripts/common.sh@345 -- # : 1 00:05:26.014 20:07:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.014 20:07:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.014 20:07:38 env -- scripts/common.sh@365 -- # decimal 1 00:05:26.014 20:07:38 env -- scripts/common.sh@353 -- # local d=1 00:05:26.014 20:07:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.014 20:07:38 env -- scripts/common.sh@355 -- # echo 1 00:05:26.014 20:07:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.014 20:07:38 env -- scripts/common.sh@366 -- # decimal 2 00:05:26.014 20:07:38 env -- scripts/common.sh@353 -- # local d=2 00:05:26.014 20:07:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.014 20:07:38 env -- scripts/common.sh@355 -- # echo 2 00:05:26.014 20:07:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.014 20:07:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.014 20:07:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.014 20:07:38 env -- scripts/common.sh@368 -- # return 0 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.014 --rc genhtml_branch_coverage=1 00:05:26.014 --rc genhtml_function_coverage=1 00:05:26.014 --rc genhtml_legend=1 00:05:26.014 --rc geninfo_all_blocks=1 00:05:26.014 --rc geninfo_unexecuted_blocks=1 00:05:26.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.014 ' 00:05:26.014 20:07:38 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.014 --rc genhtml_branch_coverage=1 00:05:26.014 --rc genhtml_function_coverage=1 00:05:26.014 --rc genhtml_legend=1 00:05:26.014 --rc geninfo_all_blocks=1 00:05:26.014 --rc geninfo_unexecuted_blocks=1 00:05:26.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.014 ' 00:05:26.015 20:07:38 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.015 --rc genhtml_branch_coverage=1 00:05:26.015 --rc genhtml_function_coverage=1 00:05:26.015 --rc genhtml_legend=1 00:05:26.015 --rc geninfo_all_blocks=1 00:05:26.015 --rc geninfo_unexecuted_blocks=1 00:05:26.015 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.015 ' 00:05:26.015 20:07:38 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.015 --rc genhtml_branch_coverage=1 00:05:26.015 --rc genhtml_function_coverage=1 00:05:26.015 --rc genhtml_legend=1 00:05:26.015 --rc geninfo_all_blocks=1 00:05:26.015 --rc geninfo_unexecuted_blocks=1 00:05:26.015 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:26.015 ' 00:05:26.015 20:07:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:26.015 20:07:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.015 20:07:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.015 20:07:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.015 ************************************ 00:05:26.015 START TEST env_memory 00:05:26.015 ************************************ 00:05:26.015 20:07:38 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:26.015 00:05:26.015 00:05:26.015 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.015 http://cunit.sourceforge.net/ 00:05:26.015 00:05:26.015 00:05:26.015 Suite: memory 00:05:26.015 Test: alloc and free memory map ...[2024-11-26 20:07:38.695646] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:26.015 passed 00:05:26.015 Test: mem map translation ...[2024-11-26 20:07:38.709504] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:26.015 [2024-11-26 20:07:38.709521] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:26.015 [2024-11-26 20:07:38.709553] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:26.015 [2024-11-26 20:07:38.709562] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:26.015 passed 00:05:26.015 Test: mem map registration ...[2024-11-26 20:07:38.730899] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:26.015 [2024-11-26 20:07:38.730915] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:26.015 passed 00:05:26.015 Test: mem map adjacent registrations ...passed 00:05:26.015 00:05:26.015 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.015 suites 1 1 n/a 0 0 00:05:26.015 tests 4 4 4 0 0 00:05:26.015 asserts 152 152 152 0 n/a 00:05:26.015 00:05:26.015 Elapsed time = 0.088 seconds 00:05:26.015 00:05:26.015 real 0m0.100s 00:05:26.015 user 0m0.090s 00:05:26.015 sys 0m0.009s 00:05:26.015 20:07:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.015 20:07:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:26.015 ************************************ 00:05:26.015 END TEST env_memory 00:05:26.015 ************************************ 00:05:26.015 20:07:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:26.015 20:07:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.015 20:07:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.015 20:07:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.015 ************************************ 00:05:26.015 START TEST env_vtophys 00:05:26.015 ************************************ 00:05:26.015 20:07:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:26.015 EAL: lib.eal log level changed from notice to debug 00:05:26.015 EAL: Detected lcore 0 as core 0 on socket 0 00:05:26.015 EAL: Detected lcore 1 as core 1 on socket 0 00:05:26.015 EAL: Detected lcore 2 as core 2 on socket 0 00:05:26.015 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:26.015 EAL: Detected lcore 4 as core 4 on socket 0 00:05:26.015 EAL: Detected lcore 5 as core 5 on socket 0 00:05:26.015 EAL: Detected lcore 6 as core 6 on socket 0 00:05:26.015 EAL: Detected lcore 7 as core 8 on socket 0 00:05:26.015 EAL: Detected lcore 8 as core 9 on socket 0 00:05:26.015 EAL: Detected lcore 9 as core 10 on socket 0 00:05:26.015 EAL: Detected lcore 10 as core 11 on socket 0 00:05:26.015 EAL: Detected lcore 11 as core 12 on socket 0 00:05:26.015 EAL: Detected lcore 12 as core 13 on socket 0 00:05:26.015 EAL: Detected lcore 13 as core 14 on socket 0 00:05:26.015 EAL: Detected lcore 14 as core 16 on socket 0 00:05:26.015 EAL: Detected lcore 15 as core 17 on socket 0 00:05:26.015 EAL: Detected lcore 16 as core 18 on socket 0 00:05:26.015 EAL: Detected lcore 17 as core 19 on socket 0 00:05:26.015 EAL: Detected lcore 18 as core 20 on socket 0 00:05:26.015 EAL: Detected lcore 19 as core 21 on socket 0 00:05:26.015 EAL: Detected lcore 20 as core 22 on socket 0 00:05:26.015 EAL: Detected lcore 21 as core 24 on socket 0 00:05:26.015 EAL: Detected lcore 22 as core 25 on socket 0 00:05:26.015 EAL: Detected lcore 23 as core 26 on socket 0 00:05:26.015 EAL: Detected lcore 24 as core 27 on socket 0 00:05:26.015 EAL: Detected lcore 25 as core 28 on socket 0 00:05:26.015 EAL: Detected lcore 26 as core 29 on socket 0 00:05:26.015 EAL: Detected lcore 27 as core 30 on socket 0 00:05:26.015 EAL: Detected lcore 28 as core 0 on socket 1 00:05:26.015 EAL: Detected lcore 29 as core 1 on socket 1 00:05:26.015 EAL: Detected lcore 30 as core 2 on socket 1 00:05:26.015 EAL: Detected lcore 31 as core 3 on socket 1 00:05:26.015 EAL: Detected lcore 32 as core 4 on socket 1 00:05:26.015 EAL: Detected lcore 33 as core 5 on socket 1 00:05:26.015 EAL: Detected lcore 34 as core 6 on socket 1 00:05:26.015 EAL: Detected lcore 35 as core 8 on socket 1 00:05:26.015 EAL: Detected lcore 36 as core 9 on socket 1 00:05:26.015 EAL: Detected lcore 37 as core 10 on socket 1 00:05:26.015 EAL: Detected lcore 38 as core 11 on socket 1 00:05:26.015 EAL: Detected lcore 39 as core 12 on socket 1 00:05:26.015 EAL: Detected lcore 40 as core 13 on socket 1 00:05:26.015 EAL: Detected lcore 41 as core 14 on socket 1 00:05:26.015 EAL: Detected lcore 42 as core 16 on socket 1 00:05:26.015 EAL: Detected lcore 43 as core 17 on socket 1 00:05:26.015 EAL: Detected lcore 44 as core 18 on socket 1 00:05:26.015 EAL: Detected lcore 45 as core 19 on socket 1 00:05:26.015 EAL: Detected lcore 46 as core 20 on socket 1 00:05:26.015 EAL: Detected lcore 47 as core 21 on socket 1 00:05:26.015 EAL: Detected lcore 48 as core 22 on socket 1 00:05:26.015 EAL: Detected lcore 49 as core 24 on socket 1 00:05:26.015 EAL: Detected lcore 50 as core 25 on socket 1 00:05:26.015 EAL: Detected lcore 51 as core 26 on socket 1 00:05:26.015 EAL: Detected lcore 52 as core 27 on socket 1 00:05:26.015 EAL: Detected lcore 53 as core 28 on socket 1 00:05:26.015 EAL: Detected lcore 54 as core 29 on socket 1 00:05:26.015 EAL: Detected lcore 55 as core 30 on socket 1 00:05:26.015 EAL: Detected lcore 56 as core 0 on socket 0 00:05:26.015 EAL: Detected lcore 57 as core 1 on socket 0 00:05:26.015 EAL: Detected lcore 58 as core 2 on socket 0 00:05:26.015 EAL: Detected lcore 59 as core 3 on socket 0 00:05:26.015 EAL: Detected lcore 60 as core 4 on socket 0 00:05:26.015 EAL: Detected lcore 61 as core 5 on socket 0 00:05:26.015 EAL: Detected lcore 62 as core 6 on socket 0 00:05:26.015 EAL: Detected lcore 63 as core 8 on socket 0 00:05:26.015 EAL: 
Detected lcore 64 as core 9 on socket 0 00:05:26.015 EAL: Detected lcore 65 as core 10 on socket 0 00:05:26.015 EAL: Detected lcore 66 as core 11 on socket 0 00:05:26.015 EAL: Detected lcore 67 as core 12 on socket 0 00:05:26.015 EAL: Detected lcore 68 as core 13 on socket 0 00:05:26.015 EAL: Detected lcore 69 as core 14 on socket 0 00:05:26.015 EAL: Detected lcore 70 as core 16 on socket 0 00:05:26.015 EAL: Detected lcore 71 as core 17 on socket 0 00:05:26.015 EAL: Detected lcore 72 as core 18 on socket 0 00:05:26.015 EAL: Detected lcore 73 as core 19 on socket 0 00:05:26.015 EAL: Detected lcore 74 as core 20 on socket 0 00:05:26.015 EAL: Detected lcore 75 as core 21 on socket 0 00:05:26.015 EAL: Detected lcore 76 as core 22 on socket 0 00:05:26.015 EAL: Detected lcore 77 as core 24 on socket 0 00:05:26.015 EAL: Detected lcore 78 as core 25 on socket 0 00:05:26.015 EAL: Detected lcore 79 as core 26 on socket 0 00:05:26.015 EAL: Detected lcore 80 as core 27 on socket 0 00:05:26.015 EAL: Detected lcore 81 as core 28 on socket 0 00:05:26.015 EAL: Detected lcore 82 as core 29 on socket 0 00:05:26.015 EAL: Detected lcore 83 as core 30 on socket 0 00:05:26.015 EAL: Detected lcore 84 as core 0 on socket 1 00:05:26.015 EAL: Detected lcore 85 as core 1 on socket 1 00:05:26.015 EAL: Detected lcore 86 as core 2 on socket 1 00:05:26.015 EAL: Detected lcore 87 as core 3 on socket 1 00:05:26.015 EAL: Detected lcore 88 as core 4 on socket 1 00:05:26.015 EAL: Detected lcore 89 as core 5 on socket 1 00:05:26.015 EAL: Detected lcore 90 as core 6 on socket 1 00:05:26.015 EAL: Detected lcore 91 as core 8 on socket 1 00:05:26.015 EAL: Detected lcore 92 as core 9 on socket 1 00:05:26.015 EAL: Detected lcore 93 as core 10 on socket 1 00:05:26.016 EAL: Detected lcore 94 as core 11 on socket 1 00:05:26.016 EAL: Detected lcore 95 as core 12 on socket 1 00:05:26.016 EAL: Detected lcore 96 as core 13 on socket 1 00:05:26.016 EAL: Detected lcore 97 as core 14 on socket 1 00:05:26.016 EAL: Detected lcore 98 as core 16 on socket 1 00:05:26.016 EAL: Detected lcore 99 as core 17 on socket 1 00:05:26.016 EAL: Detected lcore 100 as core 18 on socket 1 00:05:26.016 EAL: Detected lcore 101 as core 19 on socket 1 00:05:26.016 EAL: Detected lcore 102 as core 20 on socket 1 00:05:26.016 EAL: Detected lcore 103 as core 21 on socket 1 00:05:26.016 EAL: Detected lcore 104 as core 22 on socket 1 00:05:26.016 EAL: Detected lcore 105 as core 24 on socket 1 00:05:26.016 EAL: Detected lcore 106 as core 25 on socket 1 00:05:26.016 EAL: Detected lcore 107 as core 26 on socket 1 00:05:26.016 EAL: Detected lcore 108 as core 27 on socket 1 00:05:26.016 EAL: Detected lcore 109 as core 28 on socket 1 00:05:26.016 EAL: Detected lcore 110 as core 29 on socket 1 00:05:26.016 EAL: Detected lcore 111 as core 30 on socket 1 00:05:26.016 EAL: Maximum logical cores by configuration: 128 00:05:26.016 EAL: Detected CPU lcores: 112 00:05:26.016 EAL: Detected NUMA nodes: 2 00:05:26.016 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:26.016 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:26.016 EAL: Checking presence of .so 'librte_eal.so' 00:05:26.016 EAL: Detected static linkage of DPDK 00:05:26.016 EAL: No shared files mode enabled, IPC will be disabled 00:05:26.016 EAL: Bus pci wants IOVA as 'DC' 00:05:26.016 EAL: Buses did not request a specific IOVA mode. 00:05:26.016 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:26.016 EAL: Selected IOVA mode 'VA' 00:05:26.016 EAL: Probing VFIO support... 
00:05:26.016 EAL: IOMMU type 1 (Type 1) is supported 00:05:26.016 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:26.016 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:26.016 EAL: VFIO support initialized 00:05:26.016 EAL: Ask a virtual area of 0x2e000 bytes 00:05:26.016 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:26.016 EAL: Setting up physically contiguous memory... 00:05:26.016 EAL: Setting maximum number of open files to 524288 00:05:26.016 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:26.016 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:26.016 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:26.016 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:26.016 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.016 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:26.016 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.016 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.016 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:26.016 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:26.016 EAL: Hugepages will be freed exactly as allocated. 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: TSC frequency is ~2500000 KHz 00:05:26.016 EAL: Main lcore 0 is ready (tid=7f61a49dea00;cpuset=[0]) 00:05:26.016 EAL: Trying to obtain current memory policy. 00:05:26.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.016 EAL: Restoring previous memory policy: 0 00:05:26.016 EAL: request: mp_malloc_sync 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: Heap on socket 0 was expanded by 2MB 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: Mem event callback 'spdk:(nil)' registered 00:05:26.016 00:05:26.016 00:05:26.016 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.016 http://cunit.sourceforge.net/ 00:05:26.016 00:05:26.016 00:05:26.016 Suite: components_suite 00:05:26.016 Test: vtophys_malloc_test ...passed 00:05:26.016 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:26.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.016 EAL: Restoring previous memory policy: 4 00:05:26.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.016 EAL: request: mp_malloc_sync 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: Heap on socket 0 was expanded by 4MB 00:05:26.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.016 EAL: request: mp_malloc_sync 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: Heap on socket 0 was shrunk by 4MB 00:05:26.016 EAL: Trying to obtain current memory policy. 00:05:26.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.016 EAL: Restoring previous memory policy: 4 00:05:26.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.016 EAL: request: mp_malloc_sync 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: Heap on socket 0 was expanded by 6MB 00:05:26.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.016 EAL: request: mp_malloc_sync 00:05:26.016 EAL: No shared files mode enabled, IPC is disabled 00:05:26.016 EAL: Heap on socket 0 was shrunk by 6MB 00:05:26.016 EAL: Trying to obtain current memory policy. 00:05:26.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.274 EAL: Restoring previous memory policy: 4 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was expanded by 10MB 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was shrunk by 10MB 00:05:26.274 EAL: Trying to obtain current memory policy. 
00:05:26.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.274 EAL: Restoring previous memory policy: 4 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was expanded by 18MB 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was shrunk by 18MB 00:05:26.274 EAL: Trying to obtain current memory policy. 00:05:26.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.274 EAL: Restoring previous memory policy: 4 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was expanded by 34MB 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was shrunk by 34MB 00:05:26.274 EAL: Trying to obtain current memory policy. 00:05:26.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.274 EAL: Restoring previous memory policy: 4 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was expanded by 66MB 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was shrunk by 66MB 00:05:26.274 EAL: Trying to obtain current memory policy. 00:05:26.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.274 EAL: Restoring previous memory policy: 4 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was expanded by 130MB 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was shrunk by 130MB 00:05:26.274 EAL: Trying to obtain current memory policy. 00:05:26.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.274 EAL: Restoring previous memory policy: 4 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was expanded by 258MB 00:05:26.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.274 EAL: request: mp_malloc_sync 00:05:26.274 EAL: No shared files mode enabled, IPC is disabled 00:05:26.274 EAL: Heap on socket 0 was shrunk by 258MB 00:05:26.274 EAL: Trying to obtain current memory policy. 
00:05:26.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.531 EAL: Restoring previous memory policy: 4 00:05:26.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.531 EAL: request: mp_malloc_sync 00:05:26.531 EAL: No shared files mode enabled, IPC is disabled 00:05:26.531 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.531 EAL: request: mp_malloc_sync 00:05:26.531 EAL: No shared files mode enabled, IPC is disabled 00:05:26.531 EAL: Heap on socket 0 was shrunk by 514MB 00:05:26.531 EAL: Trying to obtain current memory policy. 00:05:26.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.789 EAL: Restoring previous memory policy: 4 00:05:26.789 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.789 EAL: request: mp_malloc_sync 00:05:26.789 EAL: No shared files mode enabled, IPC is disabled 00:05:26.789 EAL: Heap on socket 0 was expanded by 1026MB 00:05:27.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.047 EAL: request: mp_malloc_sync 00:05:27.047 EAL: No shared files mode enabled, IPC is disabled 00:05:27.047 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:27.047 passed 00:05:27.047 00:05:27.047 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.047 suites 1 1 n/a 0 0 00:05:27.047 tests 2 2 2 0 0 00:05:27.047 asserts 497 497 497 0 n/a 00:05:27.047 00:05:27.047 Elapsed time = 0.961 seconds 00:05:27.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.047 EAL: request: mp_malloc_sync 00:05:27.047 EAL: No shared files mode enabled, IPC is disabled 00:05:27.047 EAL: Heap on socket 0 was shrunk by 2MB 00:05:27.047 EAL: No shared files mode enabled, IPC is disabled 00:05:27.047 EAL: No shared files mode enabled, IPC is disabled 00:05:27.047 EAL: No shared files mode enabled, IPC is disabled 00:05:27.047 00:05:27.047 real 0m1.085s 00:05:27.047 user 0m0.634s 00:05:27.047 sys 0m0.426s 00:05:27.047 20:07:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.047 20:07:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:27.047 ************************************ 00:05:27.047 END TEST env_vtophys 00:05:27.047 ************************************ 00:05:27.047 20:07:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:27.047 20:07:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.047 20:07:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.047 20:07:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.305 ************************************ 00:05:27.305 START TEST env_pci 00:05:27.305 ************************************ 00:05:27.305 20:07:40 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:27.305 00:05:27.305 00:05:27.305 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.305 http://cunit.sourceforge.net/ 00:05:27.305 00:05:27.305 00:05:27.305 Suite: pci 00:05:27.305 Test: pci_hook ...[2024-11-26 20:07:40.032394] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1592030 has claimed it 00:05:27.305 EAL: Cannot find device (10000:00:01.0) 00:05:27.305 EAL: Failed to attach device on primary process 00:05:27.305 passed 00:05:27.305 00:05:27.305 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:27.305 suites 1 1 n/a 0 0 00:05:27.305 tests 1 1 1 0 0 00:05:27.305 asserts 25 25 25 0 n/a 00:05:27.305 00:05:27.305 Elapsed time = 0.037 seconds 00:05:27.305 00:05:27.305 real 0m0.057s 00:05:27.305 user 0m0.016s 00:05:27.305 sys 0m0.041s 00:05:27.305 20:07:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.305 20:07:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:27.305 ************************************ 00:05:27.305 END TEST env_pci 00:05:27.305 ************************************ 00:05:27.305 20:07:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.305 20:07:40 env -- env/env.sh@15 -- # uname 00:05:27.305 20:07:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.305 20:07:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.305 20:07:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.305 20:07:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:27.305 20:07:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.305 20:07:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.305 ************************************ 00:05:27.305 START TEST env_dpdk_post_init 00:05:27.305 ************************************ 00:05:27.305 20:07:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.305 EAL: Detected CPU lcores: 112 00:05:27.305 EAL: Detected NUMA nodes: 2 00:05:27.305 EAL: Detected static linkage of DPDK 00:05:27.305 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.305 EAL: Selected IOVA mode 'VA' 00:05:27.305 EAL: VFIO support initialized 00:05:27.305 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.563 EAL: Using IOMMU type 1 (Type 1) 00:05:28.130 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:32.312 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:32.312 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:32.312 Starting DPDK initialization... 00:05:32.312 Starting SPDK post initialization... 00:05:32.312 SPDK NVMe probe 00:05:32.312 Attaching to 0000:d8:00.0 00:05:32.312 Attached to 0000:d8:00.0 00:05:32.312 Cleaning up... 
00:05:32.312 00:05:32.312 real 0m4.734s 00:05:32.312 user 0m3.328s 00:05:32.312 sys 0m0.652s 00:05:32.312 20:07:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.312 20:07:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.312 ************************************ 00:05:32.312 END TEST env_dpdk_post_init 00:05:32.312 ************************************ 00:05:32.312 20:07:44 env -- env/env.sh@26 -- # uname 00:05:32.312 20:07:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:32.312 20:07:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.312 20:07:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.312 20:07:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.312 20:07:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.312 ************************************ 00:05:32.312 START TEST env_mem_callbacks 00:05:32.312 ************************************ 00:05:32.312 20:07:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.312 EAL: Detected CPU lcores: 112 00:05:32.312 EAL: Detected NUMA nodes: 2 00:05:32.312 EAL: Detected static linkage of DPDK 00:05:32.312 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.312 EAL: Selected IOVA mode 'VA' 00:05:32.312 EAL: VFIO support initialized 00:05:32.312 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.312 00:05:32.312 00:05:32.312 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.312 http://cunit.sourceforge.net/ 00:05:32.312 00:05:32.312 00:05:32.312 Suite: memory 00:05:32.312 Test: test ... 
00:05:32.312 register 0x200000200000 2097152 00:05:32.312 malloc 3145728 00:05:32.312 register 0x200000400000 4194304 00:05:32.312 buf 0x200000500000 len 3145728 PASSED 00:05:32.312 malloc 64 00:05:32.312 buf 0x2000004fff40 len 64 PASSED 00:05:32.312 malloc 4194304 00:05:32.312 register 0x200000800000 6291456 00:05:32.312 buf 0x200000a00000 len 4194304 PASSED 00:05:32.312 free 0x200000500000 3145728 00:05:32.312 free 0x2000004fff40 64 00:05:32.312 unregister 0x200000400000 4194304 PASSED 00:05:32.312 free 0x200000a00000 4194304 00:05:32.312 unregister 0x200000800000 6291456 PASSED 00:05:32.312 malloc 8388608 00:05:32.312 register 0x200000400000 10485760 00:05:32.312 buf 0x200000600000 len 8388608 PASSED 00:05:32.312 free 0x200000600000 8388608 00:05:32.312 unregister 0x200000400000 10485760 PASSED 00:05:32.312 passed 00:05:32.312 00:05:32.312 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.312 suites 1 1 n/a 0 0 00:05:32.312 tests 1 1 1 0 0 00:05:32.312 asserts 15 15 15 0 n/a 00:05:32.312 00:05:32.312 Elapsed time = 0.005 seconds 00:05:32.312 00:05:32.312 real 0m0.050s 00:05:32.312 user 0m0.012s 00:05:32.312 sys 0m0.038s 00:05:32.312 20:07:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.312 20:07:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:32.312 ************************************ 00:05:32.312 END TEST env_mem_callbacks 00:05:32.312 ************************************ 00:05:32.312 00:05:32.312 real 0m6.564s 00:05:32.312 user 0m4.319s 00:05:32.312 sys 0m1.503s 00:05:32.312 20:07:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.312 20:07:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.312 ************************************ 00:05:32.312 END TEST env 00:05:32.312 ************************************ 00:05:32.312 20:07:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.312 20:07:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.312 20:07:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.312 20:07:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.312 ************************************ 00:05:32.312 START TEST rpc 00:05:32.312 ************************************ 00:05:32.312 20:07:45 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.312 * Looking for test storage... 
00:05:32.312 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:32.312 20:07:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.312 20:07:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.312 20:07:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.571 20:07:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.571 20:07:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.571 20:07:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.571 20:07:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.571 20:07:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.571 20:07:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.571 20:07:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.571 20:07:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.571 20:07:45 rpc -- scripts/common.sh@345 -- # : 1 00:05:32.571 20:07:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.571 20:07:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.571 20:07:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.571 20:07:45 rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.571 20:07:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.571 20:07:45 rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.571 20:07:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.571 20:07:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.571 20:07:45 rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.571 20:07:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.571 20:07:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.571 20:07:45 rpc -- scripts/common.sh@368 -- # return 0 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.571 --rc genhtml_branch_coverage=1 00:05:32.571 --rc genhtml_function_coverage=1 00:05:32.571 --rc genhtml_legend=1 00:05:32.571 --rc geninfo_all_blocks=1 00:05:32.571 --rc geninfo_unexecuted_blocks=1 00:05:32.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.571 ' 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.571 --rc genhtml_branch_coverage=1 00:05:32.571 --rc genhtml_function_coverage=1 00:05:32.571 --rc genhtml_legend=1 00:05:32.571 --rc geninfo_all_blocks=1 00:05:32.571 --rc geninfo_unexecuted_blocks=1 00:05:32.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.571 ' 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:05:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.571 --rc genhtml_branch_coverage=1 00:05:32.571 --rc genhtml_function_coverage=1 00:05:32.571 --rc genhtml_legend=1 00:05:32.571 --rc geninfo_all_blocks=1 00:05:32.571 --rc geninfo_unexecuted_blocks=1 00:05:32.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.571 ' 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.571 --rc genhtml_branch_coverage=1 00:05:32.571 --rc genhtml_function_coverage=1 00:05:32.571 --rc genhtml_legend=1 00:05:32.571 --rc geninfo_all_blocks=1 00:05:32.571 --rc geninfo_unexecuted_blocks=1 00:05:32.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:32.571 ' 00:05:32.571 20:07:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1593191 00:05:32.571 20:07:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:32.571 20:07:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.571 20:07:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1593191 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 1593191 ']' 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.571 20:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.571 [2024-11-26 20:07:45.298219] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:32.571 [2024-11-26 20:07:45.298294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593191 ] 00:05:32.571 [2024-11-26 20:07:45.368179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.571 [2024-11-26 20:07:45.406632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.571 [2024-11-26 20:07:45.406668] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1593191' to capture a snapshot of events at runtime. 00:05:32.571 [2024-11-26 20:07:45.406677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.571 [2024-11-26 20:07:45.406684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.571 [2024-11-26 20:07:45.406691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1593191 for offline analysis/debug. 
00:05:32.571 [2024-11-26 20:07:45.407290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.829 20:07:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.829 20:07:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.829 20:07:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:32.829 20:07:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:32.829 20:07:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.829 20:07:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.829 20:07:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.829 20:07:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.829 20:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.829 ************************************ 00:05:32.829 START TEST rpc_integrity 00:05:32.829 ************************************ 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.829 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.829 { 00:05:32.829 "name": "Malloc0", 00:05:32.829 "aliases": [ 00:05:32.829 "2728d13d-e118-4b0d-90b6-3ff4248ac3d6" 00:05:32.829 ], 00:05:32.829 "product_name": "Malloc disk", 00:05:32.829 "block_size": 512, 00:05:32.829 "num_blocks": 16384, 00:05:32.829 "uuid": "2728d13d-e118-4b0d-90b6-3ff4248ac3d6", 00:05:32.829 "assigned_rate_limits": { 00:05:32.829 "rw_ios_per_sec": 0, 00:05:32.829 "rw_mbytes_per_sec": 0, 00:05:32.829 "r_mbytes_per_sec": 0, 00:05:32.829 "w_mbytes_per_sec": 
0 00:05:32.829 }, 00:05:32.829 "claimed": false, 00:05:32.829 "zoned": false, 00:05:32.829 "supported_io_types": { 00:05:32.829 "read": true, 00:05:32.829 "write": true, 00:05:32.829 "unmap": true, 00:05:32.829 "flush": true, 00:05:32.829 "reset": true, 00:05:32.829 "nvme_admin": false, 00:05:32.829 "nvme_io": false, 00:05:32.829 "nvme_io_md": false, 00:05:32.829 "write_zeroes": true, 00:05:32.829 "zcopy": true, 00:05:32.829 "get_zone_info": false, 00:05:32.829 "zone_management": false, 00:05:32.829 "zone_append": false, 00:05:32.829 "compare": false, 00:05:32.829 "compare_and_write": false, 00:05:32.829 "abort": true, 00:05:32.829 "seek_hole": false, 00:05:32.829 "seek_data": false, 00:05:32.829 "copy": true, 00:05:32.829 "nvme_iov_md": false 00:05:32.829 }, 00:05:32.829 "memory_domains": [ 00:05:32.829 { 00:05:32.829 "dma_device_id": "system", 00:05:32.829 "dma_device_type": 1 00:05:32.829 }, 00:05:32.829 { 00:05:32.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.829 "dma_device_type": 2 00:05:32.829 } 00:05:32.829 ], 00:05:32.829 "driver_specific": {} 00:05:32.829 } 00:05:32.829 ]' 00:05:32.829 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.086 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.086 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:33.086 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.086 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.086 [2024-11-26 20:07:45.774761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:33.086 [2024-11-26 20:07:45.774794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.086 [2024-11-26 20:07:45.774813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5998280 00:05:33.086 [2024-11-26 20:07:45.774823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.086 [2024-11-26 20:07:45.775760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.086 [2024-11-26 20:07:45.775783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.086 Passthru0 00:05:33.086 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.086 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.086 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.086 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.086 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.086 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.086 { 00:05:33.086 "name": "Malloc0", 00:05:33.086 "aliases": [ 00:05:33.086 "2728d13d-e118-4b0d-90b6-3ff4248ac3d6" 00:05:33.087 ], 00:05:33.087 "product_name": "Malloc disk", 00:05:33.087 "block_size": 512, 00:05:33.087 "num_blocks": 16384, 00:05:33.087 "uuid": "2728d13d-e118-4b0d-90b6-3ff4248ac3d6", 00:05:33.087 "assigned_rate_limits": { 00:05:33.087 "rw_ios_per_sec": 0, 00:05:33.087 "rw_mbytes_per_sec": 0, 00:05:33.087 "r_mbytes_per_sec": 0, 00:05:33.087 "w_mbytes_per_sec": 0 00:05:33.087 }, 00:05:33.087 "claimed": true, 00:05:33.087 "claim_type": "exclusive_write", 00:05:33.087 "zoned": false, 00:05:33.087 "supported_io_types": { 00:05:33.087 "read": true, 00:05:33.087 "write": true, 00:05:33.087 "unmap": true, 
00:05:33.087 "flush": true, 00:05:33.087 "reset": true, 00:05:33.087 "nvme_admin": false, 00:05:33.087 "nvme_io": false, 00:05:33.087 "nvme_io_md": false, 00:05:33.087 "write_zeroes": true, 00:05:33.087 "zcopy": true, 00:05:33.087 "get_zone_info": false, 00:05:33.087 "zone_management": false, 00:05:33.087 "zone_append": false, 00:05:33.087 "compare": false, 00:05:33.087 "compare_and_write": false, 00:05:33.087 "abort": true, 00:05:33.087 "seek_hole": false, 00:05:33.087 "seek_data": false, 00:05:33.087 "copy": true, 00:05:33.087 "nvme_iov_md": false 00:05:33.087 }, 00:05:33.087 "memory_domains": [ 00:05:33.087 { 00:05:33.087 "dma_device_id": "system", 00:05:33.087 "dma_device_type": 1 00:05:33.087 }, 00:05:33.087 { 00:05:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.087 "dma_device_type": 2 00:05:33.087 } 00:05:33.087 ], 00:05:33.087 "driver_specific": {} 00:05:33.087 }, 00:05:33.087 { 00:05:33.087 "name": "Passthru0", 00:05:33.087 "aliases": [ 00:05:33.087 "e9c3d8b4-332f-5af4-8ff7-92ff9948fb89" 00:05:33.087 ], 00:05:33.087 "product_name": "passthru", 00:05:33.087 "block_size": 512, 00:05:33.087 "num_blocks": 16384, 00:05:33.087 "uuid": "e9c3d8b4-332f-5af4-8ff7-92ff9948fb89", 00:05:33.087 "assigned_rate_limits": { 00:05:33.087 "rw_ios_per_sec": 0, 00:05:33.087 "rw_mbytes_per_sec": 0, 00:05:33.087 "r_mbytes_per_sec": 0, 00:05:33.087 "w_mbytes_per_sec": 0 00:05:33.087 }, 00:05:33.087 "claimed": false, 00:05:33.087 "zoned": false, 00:05:33.087 "supported_io_types": { 00:05:33.087 "read": true, 00:05:33.087 "write": true, 00:05:33.087 "unmap": true, 00:05:33.087 "flush": true, 00:05:33.087 "reset": true, 00:05:33.087 "nvme_admin": false, 00:05:33.087 "nvme_io": false, 00:05:33.087 "nvme_io_md": false, 00:05:33.087 "write_zeroes": true, 00:05:33.087 "zcopy": true, 00:05:33.087 "get_zone_info": false, 00:05:33.087 "zone_management": false, 00:05:33.087 "zone_append": false, 00:05:33.087 "compare": false, 00:05:33.087 "compare_and_write": false, 00:05:33.087 "abort": true, 00:05:33.087 "seek_hole": false, 00:05:33.087 "seek_data": false, 00:05:33.087 "copy": true, 00:05:33.087 "nvme_iov_md": false 00:05:33.087 }, 00:05:33.087 "memory_domains": [ 00:05:33.087 { 00:05:33.087 "dma_device_id": "system", 00:05:33.087 "dma_device_type": 1 00:05:33.087 }, 00:05:33.087 { 00:05:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.087 "dma_device_type": 2 00:05:33.087 } 00:05:33.087 ], 00:05:33.087 "driver_specific": { 00:05:33.087 "passthru": { 00:05:33.087 "name": "Passthru0", 00:05:33.087 "base_bdev_name": "Malloc0" 00:05:33.087 } 00:05:33.087 } 00:05:33.087 } 00:05:33.087 ]' 00:05:33.087 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.087 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.087 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.087 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.087 20:07:45 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.087 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.087 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.087 20:07:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.087 00:05:33.087 real 0m0.269s 00:05:33.087 user 0m0.161s 00:05:33.087 sys 0m0.046s 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.087 20:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.087 ************************************ 00:05:33.087 END TEST rpc_integrity 00:05:33.087 ************************************ 00:05:33.087 20:07:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:33.087 20:07:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.087 20:07:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.087 20:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.344 ************************************ 00:05:33.344 START TEST rpc_plugins 00:05:33.344 ************************************ 00:05:33.344 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:33.344 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:33.344 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.344 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.344 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.344 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:33.344 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:33.344 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.344 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.344 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.344 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:33.344 { 00:05:33.344 "name": "Malloc1", 00:05:33.344 "aliases": [ 00:05:33.344 "5e20d7cb-a534-4271-b288-dbad9dd6d2ff" 00:05:33.344 ], 00:05:33.344 "product_name": "Malloc disk", 00:05:33.344 "block_size": 4096, 00:05:33.344 "num_blocks": 256, 00:05:33.344 "uuid": "5e20d7cb-a534-4271-b288-dbad9dd6d2ff", 00:05:33.344 "assigned_rate_limits": { 00:05:33.344 "rw_ios_per_sec": 0, 00:05:33.344 "rw_mbytes_per_sec": 0, 00:05:33.344 "r_mbytes_per_sec": 0, 00:05:33.344 "w_mbytes_per_sec": 0 00:05:33.344 }, 00:05:33.344 "claimed": false, 00:05:33.344 "zoned": false, 00:05:33.344 "supported_io_types": { 00:05:33.344 "read": true, 00:05:33.344 "write": true, 00:05:33.344 "unmap": true, 00:05:33.345 "flush": true, 00:05:33.345 "reset": true, 00:05:33.345 "nvme_admin": false, 00:05:33.345 "nvme_io": false, 00:05:33.345 "nvme_io_md": false, 00:05:33.345 "write_zeroes": true, 00:05:33.345 "zcopy": true, 00:05:33.345 "get_zone_info": false, 00:05:33.345 "zone_management": false, 00:05:33.345 "zone_append": false, 00:05:33.345 "compare": false, 00:05:33.345 "compare_and_write": false, 00:05:33.345 "abort": true, 00:05:33.345 "seek_hole": false, 00:05:33.345 "seek_data": false, 00:05:33.345 "copy": true, 00:05:33.345 
"nvme_iov_md": false 00:05:33.345 }, 00:05:33.345 "memory_domains": [ 00:05:33.345 { 00:05:33.345 "dma_device_id": "system", 00:05:33.345 "dma_device_type": 1 00:05:33.345 }, 00:05:33.345 { 00:05:33.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.345 "dma_device_type": 2 00:05:33.345 } 00:05:33.345 ], 00:05:33.345 "driver_specific": {} 00:05:33.345 } 00:05:33.345 ]' 00:05:33.345 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:33.345 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.345 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.345 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.345 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.345 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:33.345 20:07:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.345 00:05:33.345 real 0m0.132s 00:05:33.345 user 0m0.069s 00:05:33.345 sys 0m0.027s 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.345 20:07:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 ************************************ 00:05:33.345 END TEST rpc_plugins 00:05:33.345 ************************************ 00:05:33.345 20:07:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.345 20:07:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.345 20:07:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.345 20:07:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 ************************************ 00:05:33.345 START TEST rpc_trace_cmd_test 00:05:33.345 ************************************ 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:33.345 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1593191", 00:05:33.345 "tpoint_group_mask": "0x8", 00:05:33.345 "iscsi_conn": { 00:05:33.345 "mask": "0x2", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "scsi": { 00:05:33.345 "mask": "0x4", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "bdev": { 00:05:33.345 "mask": "0x8", 00:05:33.345 "tpoint_mask": "0xffffffffffffffff" 00:05:33.345 }, 00:05:33.345 "nvmf_rdma": { 00:05:33.345 "mask": "0x10", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "nvmf_tcp": { 00:05:33.345 "mask": "0x20", 
00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "ftl": { 00:05:33.345 "mask": "0x40", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "blobfs": { 00:05:33.345 "mask": "0x80", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "dsa": { 00:05:33.345 "mask": "0x200", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "thread": { 00:05:33.345 "mask": "0x400", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "nvme_pcie": { 00:05:33.345 "mask": "0x800", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "iaa": { 00:05:33.345 "mask": "0x1000", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "nvme_tcp": { 00:05:33.345 "mask": "0x2000", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "bdev_nvme": { 00:05:33.345 "mask": "0x4000", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "sock": { 00:05:33.345 "mask": "0x8000", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "blob": { 00:05:33.345 "mask": "0x10000", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "bdev_raid": { 00:05:33.345 "mask": "0x20000", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 }, 00:05:33.345 "scheduler": { 00:05:33.345 "mask": "0x40000", 00:05:33.345 "tpoint_mask": "0x0" 00:05:33.345 } 00:05:33.345 }' 00:05:33.345 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.602 00:05:33.602 real 0m0.217s 00:05:33.602 user 0m0.177s 00:05:33.602 sys 0m0.033s 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.602 20:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.602 ************************************ 00:05:33.602 END TEST rpc_trace_cmd_test 00:05:33.602 ************************************ 00:05:33.602 20:07:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.602 20:07:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.602 20:07:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.602 20:07:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.602 20:07:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.602 20:07:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.602 ************************************ 00:05:33.602 START TEST rpc_daemon_integrity 00:05:33.602 ************************************ 00:05:33.602 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:33.602 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.602 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.602 20:07:46 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.602 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.602 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.602 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.859 { 00:05:33.859 "name": "Malloc2", 00:05:33.859 "aliases": [ 00:05:33.859 "9a0895b7-45f2-4688-9991-ac92a178ab81" 00:05:33.859 ], 00:05:33.859 "product_name": "Malloc disk", 00:05:33.859 "block_size": 512, 00:05:33.859 "num_blocks": 16384, 00:05:33.859 "uuid": "9a0895b7-45f2-4688-9991-ac92a178ab81", 00:05:33.859 "assigned_rate_limits": { 00:05:33.859 "rw_ios_per_sec": 0, 00:05:33.859 "rw_mbytes_per_sec": 0, 00:05:33.859 "r_mbytes_per_sec": 0, 00:05:33.859 "w_mbytes_per_sec": 0 00:05:33.859 }, 00:05:33.859 "claimed": false, 00:05:33.859 "zoned": false, 00:05:33.859 "supported_io_types": { 00:05:33.859 "read": true, 00:05:33.859 "write": true, 00:05:33.859 "unmap": true, 00:05:33.859 "flush": true, 00:05:33.859 "reset": true, 00:05:33.859 "nvme_admin": false, 00:05:33.859 "nvme_io": false, 00:05:33.859 "nvme_io_md": false, 00:05:33.859 "write_zeroes": true, 00:05:33.859 "zcopy": true, 00:05:33.859 "get_zone_info": false, 00:05:33.859 "zone_management": false, 00:05:33.859 "zone_append": false, 00:05:33.859 "compare": false, 00:05:33.859 "compare_and_write": false, 00:05:33.859 "abort": true, 00:05:33.859 "seek_hole": false, 00:05:33.859 "seek_data": false, 00:05:33.859 "copy": true, 00:05:33.859 "nvme_iov_md": false 00:05:33.859 }, 00:05:33.859 "memory_domains": [ 00:05:33.859 { 00:05:33.859 "dma_device_id": "system", 00:05:33.859 "dma_device_type": 1 00:05:33.859 }, 00:05:33.859 { 00:05:33.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.859 "dma_device_type": 2 00:05:33.859 } 00:05:33.859 ], 00:05:33.859 "driver_specific": {} 00:05:33.859 } 00:05:33.859 ]' 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 [2024-11-26 20:07:46.645003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.859 
[2024-11-26 20:07:46.645036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.859 [2024-11-26 20:07:46.645060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x598d8b0 00:05:33.859 [2024-11-26 20:07:46.645070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.859 [2024-11-26 20:07:46.645809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.859 [2024-11-26 20:07:46.645831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.859 Passthru0 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.859 { 00:05:33.859 "name": "Malloc2", 00:05:33.859 "aliases": [ 00:05:33.859 "9a0895b7-45f2-4688-9991-ac92a178ab81" 00:05:33.859 ], 00:05:33.859 "product_name": "Malloc disk", 00:05:33.859 "block_size": 512, 00:05:33.859 "num_blocks": 16384, 00:05:33.859 "uuid": "9a0895b7-45f2-4688-9991-ac92a178ab81", 00:05:33.859 "assigned_rate_limits": { 00:05:33.859 "rw_ios_per_sec": 0, 00:05:33.859 "rw_mbytes_per_sec": 0, 00:05:33.859 "r_mbytes_per_sec": 0, 00:05:33.859 "w_mbytes_per_sec": 0 00:05:33.859 }, 00:05:33.859 "claimed": true, 00:05:33.859 "claim_type": "exclusive_write", 00:05:33.859 "zoned": false, 00:05:33.859 "supported_io_types": { 00:05:33.859 "read": true, 00:05:33.859 "write": true, 00:05:33.859 "unmap": true, 00:05:33.859 "flush": true, 00:05:33.859 "reset": true, 00:05:33.859 "nvme_admin": false, 00:05:33.859 "nvme_io": false, 00:05:33.859 "nvme_io_md": false, 00:05:33.859 "write_zeroes": true, 00:05:33.859 "zcopy": true, 00:05:33.859 "get_zone_info": false, 00:05:33.859 "zone_management": false, 00:05:33.859 "zone_append": false, 00:05:33.859 "compare": false, 00:05:33.859 "compare_and_write": false, 00:05:33.859 "abort": true, 00:05:33.859 "seek_hole": false, 00:05:33.859 "seek_data": false, 00:05:33.859 "copy": true, 00:05:33.859 "nvme_iov_md": false 00:05:33.859 }, 00:05:33.859 "memory_domains": [ 00:05:33.859 { 00:05:33.859 "dma_device_id": "system", 00:05:33.859 "dma_device_type": 1 00:05:33.859 }, 00:05:33.859 { 00:05:33.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.859 "dma_device_type": 2 00:05:33.859 } 00:05:33.859 ], 00:05:33.859 "driver_specific": {} 00:05:33.859 }, 00:05:33.859 { 00:05:33.859 "name": "Passthru0", 00:05:33.859 "aliases": [ 00:05:33.859 "92a4c25e-f741-55ce-a2d5-d4fcd3743466" 00:05:33.859 ], 00:05:33.859 "product_name": "passthru", 00:05:33.859 "block_size": 512, 00:05:33.859 "num_blocks": 16384, 00:05:33.859 "uuid": "92a4c25e-f741-55ce-a2d5-d4fcd3743466", 00:05:33.859 "assigned_rate_limits": { 00:05:33.859 "rw_ios_per_sec": 0, 00:05:33.859 "rw_mbytes_per_sec": 0, 00:05:33.859 "r_mbytes_per_sec": 0, 00:05:33.859 "w_mbytes_per_sec": 0 00:05:33.859 }, 00:05:33.859 "claimed": false, 00:05:33.859 "zoned": false, 00:05:33.859 "supported_io_types": { 00:05:33.859 "read": true, 00:05:33.859 "write": true, 00:05:33.859 "unmap": true, 00:05:33.859 "flush": true, 00:05:33.859 "reset": true, 
00:05:33.859 "nvme_admin": false, 00:05:33.859 "nvme_io": false, 00:05:33.859 "nvme_io_md": false, 00:05:33.859 "write_zeroes": true, 00:05:33.859 "zcopy": true, 00:05:33.859 "get_zone_info": false, 00:05:33.859 "zone_management": false, 00:05:33.859 "zone_append": false, 00:05:33.859 "compare": false, 00:05:33.859 "compare_and_write": false, 00:05:33.859 "abort": true, 00:05:33.859 "seek_hole": false, 00:05:33.859 "seek_data": false, 00:05:33.859 "copy": true, 00:05:33.859 "nvme_iov_md": false 00:05:33.859 }, 00:05:33.859 "memory_domains": [ 00:05:33.859 { 00:05:33.859 "dma_device_id": "system", 00:05:33.859 "dma_device_type": 1 00:05:33.859 }, 00:05:33.859 { 00:05:33.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.859 "dma_device_type": 2 00:05:33.859 } 00:05:33.859 ], 00:05:33.859 "driver_specific": { 00:05:33.859 "passthru": { 00:05:33.859 "name": "Passthru0", 00:05:33.859 "base_bdev_name": "Malloc2" 00:05:33.859 } 00:05:33.859 } 00:05:33.859 } 00:05:33.859 ]' 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.859 00:05:33.859 real 0m0.276s 00:05:33.859 user 0m0.160s 00:05:33.859 sys 0m0.058s 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.859 20:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.859 ************************************ 00:05:33.859 END TEST rpc_daemon_integrity 00:05:33.859 ************************************ 00:05:34.117 20:07:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:34.117 20:07:46 rpc -- rpc/rpc.sh@84 -- # killprocess 1593191 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 1593191 ']' 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@958 -- # kill -0 1593191 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@959 -- # uname 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593191 
00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593191' 00:05:34.117 killing process with pid 1593191 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@973 -- # kill 1593191 00:05:34.117 20:07:46 rpc -- common/autotest_common.sh@978 -- # wait 1593191 00:05:34.375 00:05:34.375 real 0m2.073s 00:05:34.375 user 0m2.609s 00:05:34.375 sys 0m0.768s 00:05:34.375 20:07:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.375 20:07:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.375 ************************************ 00:05:34.375 END TEST rpc 00:05:34.375 ************************************ 00:05:34.375 20:07:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.375 20:07:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.375 20:07:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.375 20:07:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.375 ************************************ 00:05:34.375 START TEST skip_rpc 00:05:34.375 ************************************ 00:05:34.375 20:07:47 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.634 * Looking for test storage... 00:05:34.634 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.634 20:07:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.634 --rc genhtml_branch_coverage=1 00:05:34.634 --rc genhtml_function_coverage=1 00:05:34.634 --rc genhtml_legend=1 00:05:34.634 --rc geninfo_all_blocks=1 00:05:34.634 --rc geninfo_unexecuted_blocks=1 00:05:34.634 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.634 ' 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.634 --rc genhtml_branch_coverage=1 00:05:34.634 --rc genhtml_function_coverage=1 00:05:34.634 --rc genhtml_legend=1 00:05:34.634 --rc geninfo_all_blocks=1 00:05:34.634 --rc geninfo_unexecuted_blocks=1 00:05:34.634 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.634 ' 00:05:34.634 20:07:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.634 --rc genhtml_branch_coverage=1 00:05:34.634 --rc genhtml_function_coverage=1 00:05:34.634 --rc genhtml_legend=1 00:05:34.634 --rc geninfo_all_blocks=1 00:05:34.634 --rc geninfo_unexecuted_blocks=1 00:05:34.634 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.634 ' 00:05:34.635 20:07:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.635 --rc genhtml_branch_coverage=1 00:05:34.635 --rc genhtml_function_coverage=1 00:05:34.635 --rc genhtml_legend=1 00:05:34.635 --rc geninfo_all_blocks=1 00:05:34.635 --rc geninfo_unexecuted_blocks=1 00:05:34.635 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.635 ' 00:05:34.635 20:07:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:34.635 20:07:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:34.635 20:07:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.635 20:07:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.635 20:07:47 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.635 20:07:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.635 ************************************ 00:05:34.635 START TEST skip_rpc 00:05:34.635 ************************************ 00:05:34.635 20:07:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:34.635 20:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1593651 00:05:34.635 20:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.635 20:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.635 20:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.635 [2024-11-26 20:07:47.525607] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:34.635 [2024-11-26 20:07:47.525686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593651 ] 00:05:34.893 [2024-11-26 20:07:47.595249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.893 [2024-11-26 20:07:47.634468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1593651 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1593651 ']' 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1593651 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593651 
00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593651' 00:05:40.152 killing process with pid 1593651 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1593651 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1593651 00:05:40.152 00:05:40.152 real 0m5.377s 00:05:40.152 user 0m5.139s 00:05:40.152 sys 0m0.281s 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.152 20:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.152 ************************************ 00:05:40.152 END TEST skip_rpc 00:05:40.152 ************************************ 00:05:40.152 20:07:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.152 20:07:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.152 20:07:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.152 20:07:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.152 ************************************ 00:05:40.152 START TEST skip_rpc_with_json 00:05:40.152 ************************************ 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1594734 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1594734 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1594734 ']' 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.152 20:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.152 [2024-11-26 20:07:52.985284] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:05:40.152 [2024-11-26 20:07:52.985341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594734 ] 00:05:40.153 [2024-11-26 20:07:53.056089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.411 [2024-11-26 20:07:53.099614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.411 [2024-11-26 20:07:53.309998] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.411 request: 00:05:40.411 { 00:05:40.411 "trtype": "tcp", 00:05:40.411 "method": "nvmf_get_transports", 00:05:40.411 "req_id": 1 00:05:40.411 } 00:05:40.411 Got JSON-RPC error response 00:05:40.411 response: 00:05:40.411 { 00:05:40.411 "code": -19, 00:05:40.411 "message": "No such device" 00:05:40.411 } 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.411 [2024-11-26 20:07:53.322095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.411 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.669 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.669 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:40.669 { 00:05:40.669 "subsystems": [ 00:05:40.669 { 00:05:40.669 "subsystem": "scheduler", 00:05:40.669 "config": [ 00:05:40.669 { 00:05:40.669 "method": "framework_set_scheduler", 00:05:40.669 "params": { 00:05:40.669 "name": "static" 00:05:40.669 } 00:05:40.669 } 00:05:40.669 ] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "vmd", 00:05:40.669 "config": [] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "sock", 00:05:40.669 "config": [ 00:05:40.669 { 00:05:40.669 "method": "sock_set_default_impl", 00:05:40.669 "params": { 00:05:40.669 "impl_name": "posix" 00:05:40.669 } 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "method": "sock_impl_set_options", 00:05:40.669 "params": { 00:05:40.669 "impl_name": "ssl", 00:05:40.669 "recv_buf_size": 4096, 00:05:40.669 "send_buf_size": 4096, 00:05:40.669 "enable_recv_pipe": true, 00:05:40.669 "enable_quickack": false, 00:05:40.669 
"enable_placement_id": 0, 00:05:40.669 "enable_zerocopy_send_server": true, 00:05:40.669 "enable_zerocopy_send_client": false, 00:05:40.669 "zerocopy_threshold": 0, 00:05:40.669 "tls_version": 0, 00:05:40.669 "enable_ktls": false 00:05:40.669 } 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "method": "sock_impl_set_options", 00:05:40.669 "params": { 00:05:40.669 "impl_name": "posix", 00:05:40.669 "recv_buf_size": 2097152, 00:05:40.669 "send_buf_size": 2097152, 00:05:40.669 "enable_recv_pipe": true, 00:05:40.669 "enable_quickack": false, 00:05:40.669 "enable_placement_id": 0, 00:05:40.669 "enable_zerocopy_send_server": true, 00:05:40.669 "enable_zerocopy_send_client": false, 00:05:40.669 "zerocopy_threshold": 0, 00:05:40.669 "tls_version": 0, 00:05:40.669 "enable_ktls": false 00:05:40.669 } 00:05:40.669 } 00:05:40.669 ] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "iobuf", 00:05:40.669 "config": [ 00:05:40.669 { 00:05:40.669 "method": "iobuf_set_options", 00:05:40.669 "params": { 00:05:40.669 "small_pool_count": 8192, 00:05:40.669 "large_pool_count": 1024, 00:05:40.669 "small_bufsize": 8192, 00:05:40.669 "large_bufsize": 135168, 00:05:40.669 "enable_numa": false 00:05:40.669 } 00:05:40.669 } 00:05:40.669 ] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "keyring", 00:05:40.669 "config": [] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "vfio_user_target", 00:05:40.669 "config": null 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "fsdev", 00:05:40.669 "config": [ 00:05:40.669 { 00:05:40.669 "method": "fsdev_set_opts", 00:05:40.669 "params": { 00:05:40.669 "fsdev_io_pool_size": 65535, 00:05:40.669 "fsdev_io_cache_size": 256 00:05:40.669 } 00:05:40.669 } 00:05:40.669 ] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "accel", 00:05:40.669 "config": [ 00:05:40.669 { 00:05:40.669 "method": "accel_set_options", 00:05:40.669 "params": { 00:05:40.669 "small_cache_size": 128, 00:05:40.669 "large_cache_size": 16, 00:05:40.669 "task_count": 2048, 00:05:40.669 "sequence_count": 2048, 00:05:40.669 "buf_count": 2048 00:05:40.669 } 00:05:40.669 } 00:05:40.669 ] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "bdev", 00:05:40.669 "config": [ 00:05:40.669 { 00:05:40.669 "method": "bdev_set_options", 00:05:40.669 "params": { 00:05:40.669 "bdev_io_pool_size": 65535, 00:05:40.669 "bdev_io_cache_size": 256, 00:05:40.669 "bdev_auto_examine": true, 00:05:40.669 "iobuf_small_cache_size": 128, 00:05:40.669 "iobuf_large_cache_size": 16 00:05:40.669 } 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "method": "bdev_raid_set_options", 00:05:40.669 "params": { 00:05:40.669 "process_window_size_kb": 1024, 00:05:40.669 "process_max_bandwidth_mb_sec": 0 00:05:40.669 } 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "method": "bdev_nvme_set_options", 00:05:40.669 "params": { 00:05:40.669 "action_on_timeout": "none", 00:05:40.669 "timeout_us": 0, 00:05:40.669 "timeout_admin_us": 0, 00:05:40.669 "keep_alive_timeout_ms": 10000, 00:05:40.669 "arbitration_burst": 0, 00:05:40.669 "low_priority_weight": 0, 00:05:40.669 "medium_priority_weight": 0, 00:05:40.669 "high_priority_weight": 0, 00:05:40.669 "nvme_adminq_poll_period_us": 10000, 00:05:40.669 "nvme_ioq_poll_period_us": 0, 00:05:40.669 "io_queue_requests": 0, 00:05:40.669 "delay_cmd_submit": true, 00:05:40.669 "transport_retry_count": 4, 00:05:40.669 "bdev_retry_count": 3, 00:05:40.669 "transport_ack_timeout": 0, 00:05:40.669 "ctrlr_loss_timeout_sec": 0, 00:05:40.669 "reconnect_delay_sec": 0, 00:05:40.669 
"fast_io_fail_timeout_sec": 0, 00:05:40.669 "disable_auto_failback": false, 00:05:40.669 "generate_uuids": false, 00:05:40.669 "transport_tos": 0, 00:05:40.669 "nvme_error_stat": false, 00:05:40.669 "rdma_srq_size": 0, 00:05:40.669 "io_path_stat": false, 00:05:40.669 "allow_accel_sequence": false, 00:05:40.669 "rdma_max_cq_size": 0, 00:05:40.669 "rdma_cm_event_timeout_ms": 0, 00:05:40.669 "dhchap_digests": [ 00:05:40.669 "sha256", 00:05:40.669 "sha384", 00:05:40.669 "sha512" 00:05:40.669 ], 00:05:40.669 "dhchap_dhgroups": [ 00:05:40.669 "null", 00:05:40.669 "ffdhe2048", 00:05:40.669 "ffdhe3072", 00:05:40.669 "ffdhe4096", 00:05:40.669 "ffdhe6144", 00:05:40.669 "ffdhe8192" 00:05:40.669 ] 00:05:40.669 } 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "method": "bdev_nvme_set_hotplug", 00:05:40.669 "params": { 00:05:40.669 "period_us": 100000, 00:05:40.669 "enable": false 00:05:40.669 } 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "method": "bdev_iscsi_set_options", 00:05:40.669 "params": { 00:05:40.669 "timeout_sec": 30 00:05:40.669 } 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "method": "bdev_wait_for_examine" 00:05:40.669 } 00:05:40.669 ] 00:05:40.669 }, 00:05:40.669 { 00:05:40.669 "subsystem": "nvmf", 00:05:40.669 "config": [ 00:05:40.669 { 00:05:40.669 "method": "nvmf_set_config", 00:05:40.669 "params": { 00:05:40.669 "discovery_filter": "match_any", 00:05:40.669 "admin_cmd_passthru": { 00:05:40.669 "identify_ctrlr": false 00:05:40.669 }, 00:05:40.669 "dhchap_digests": [ 00:05:40.669 "sha256", 00:05:40.669 "sha384", 00:05:40.669 "sha512" 00:05:40.669 ], 00:05:40.670 "dhchap_dhgroups": [ 00:05:40.670 "null", 00:05:40.670 "ffdhe2048", 00:05:40.670 "ffdhe3072", 00:05:40.670 "ffdhe4096", 00:05:40.670 "ffdhe6144", 00:05:40.670 "ffdhe8192" 00:05:40.670 ] 00:05:40.670 } 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "method": "nvmf_set_max_subsystems", 00:05:40.670 "params": { 00:05:40.670 "max_subsystems": 1024 00:05:40.670 } 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "method": "nvmf_set_crdt", 00:05:40.670 "params": { 00:05:40.670 "crdt1": 0, 00:05:40.670 "crdt2": 0, 00:05:40.670 "crdt3": 0 00:05:40.670 } 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "method": "nvmf_create_transport", 00:05:40.670 "params": { 00:05:40.670 "trtype": "TCP", 00:05:40.670 "max_queue_depth": 128, 00:05:40.670 "max_io_qpairs_per_ctrlr": 127, 00:05:40.670 "in_capsule_data_size": 4096, 00:05:40.670 "max_io_size": 131072, 00:05:40.670 "io_unit_size": 131072, 00:05:40.670 "max_aq_depth": 128, 00:05:40.670 "num_shared_buffers": 511, 00:05:40.670 "buf_cache_size": 4294967295, 00:05:40.670 "dif_insert_or_strip": false, 00:05:40.670 "zcopy": false, 00:05:40.670 "c2h_success": true, 00:05:40.670 "sock_priority": 0, 00:05:40.670 "abort_timeout_sec": 1, 00:05:40.670 "ack_timeout": 0, 00:05:40.670 "data_wr_pool_size": 0 00:05:40.670 } 00:05:40.670 } 00:05:40.670 ] 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "subsystem": "nbd", 00:05:40.670 "config": [] 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "subsystem": "ublk", 00:05:40.670 "config": [] 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "subsystem": "vhost_blk", 00:05:40.670 "config": [] 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "subsystem": "scsi", 00:05:40.670 "config": null 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "subsystem": "iscsi", 00:05:40.670 "config": [ 00:05:40.670 { 00:05:40.670 "method": "iscsi_set_options", 00:05:40.670 "params": { 00:05:40.670 "node_base": "iqn.2016-06.io.spdk", 00:05:40.670 "max_sessions": 128, 00:05:40.670 "max_connections_per_session": 2, 
00:05:40.670 "max_queue_depth": 64, 00:05:40.670 "default_time2wait": 2, 00:05:40.670 "default_time2retain": 20, 00:05:40.670 "first_burst_length": 8192, 00:05:40.670 "immediate_data": true, 00:05:40.670 "allow_duplicated_isid": false, 00:05:40.670 "error_recovery_level": 0, 00:05:40.670 "nop_timeout": 60, 00:05:40.670 "nop_in_interval": 30, 00:05:40.670 "disable_chap": false, 00:05:40.670 "require_chap": false, 00:05:40.670 "mutual_chap": false, 00:05:40.670 "chap_group": 0, 00:05:40.670 "max_large_datain_per_connection": 64, 00:05:40.670 "max_r2t_per_connection": 4, 00:05:40.670 "pdu_pool_size": 36864, 00:05:40.670 "immediate_data_pool_size": 16384, 00:05:40.670 "data_out_pool_size": 2048 00:05:40.670 } 00:05:40.670 } 00:05:40.670 ] 00:05:40.670 }, 00:05:40.670 { 00:05:40.670 "subsystem": "vhost_scsi", 00:05:40.670 "config": [] 00:05:40.670 } 00:05:40.670 ] 00:05:40.670 } 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1594734 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1594734 ']' 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1594734 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1594734 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1594734' 00:05:40.670 killing process with pid 1594734 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1594734 00:05:40.670 20:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1594734 00:05:41.236 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1594756 00:05:41.236 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:41.236 20:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1594756 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1594756 ']' 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1594756 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1594756 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.496 20:07:58 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1594756' 00:05:46.496 killing process with pid 1594756 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1594756 00:05:46.496 20:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1594756 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:46.496 00:05:46.496 real 0m6.282s 00:05:46.496 user 0m5.975s 00:05:46.496 sys 0m0.645s 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.496 ************************************ 00:05:46.496 END TEST skip_rpc_with_json 00:05:46.496 ************************************ 00:05:46.496 20:07:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.496 20:07:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.496 20:07:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.496 20:07:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.496 ************************************ 00:05:46.496 START TEST skip_rpc_with_delay 00:05:46.496 ************************************ 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.496 [2024-11-26 20:07:59.330420] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.496 00:05:46.496 real 0m0.031s 00:05:46.496 user 0m0.015s 00:05:46.496 sys 0m0.016s 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.496 20:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:46.496 ************************************ 00:05:46.496 END TEST skip_rpc_with_delay 00:05:46.496 ************************************ 00:05:46.496 20:07:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:46.496 20:07:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:46.496 20:07:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:46.496 20:07:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.496 20:07:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.496 20:07:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.496 ************************************ 00:05:46.496 START TEST exit_on_failed_rpc_init 00:05:46.496 ************************************ 00:05:46.496 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:46.496 20:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.496 20:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1595871 00:05:46.496 20:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1595871 00:05:46.496 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1595871 ']' 00:05:46.496 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.497 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.497 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.497 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.497 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.754 [2024-11-26 20:07:59.426435] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
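Before the exit_on_failed_rpc_init target that is booting here, the skip_rpc_with_json test above dumped the running target's full JSON configuration and then replayed it with --json and --no-rpc-server, finally grepping the replay log for 'TCP Transport Init'. A minimal sketch of that round trip, assuming the snapshot is taken with rpc.py save_config and that the TCP transport had been created over RPC beforehand (paths and file names here are illustrative, not the exact ones skip_rpc.sh uses):

    # start a target, configure it over RPC, then snapshot the live config
    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!
    # (a real test waits for the RPC socket to answer before issuing calls)
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py save_config > /tmp/config.json
    kill -SIGINT "$pid"; wait "$pid"

    # replay the snapshot with no RPC server and check the transport came back
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt && echo 'transport restored'
    kill -SIGINT $!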
00:05:46.754 [2024-11-26 20:07:59.426477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595871 ] 00:05:46.754 [2024-11-26 20:07:59.495523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.754 [2024-11-26 20:07:59.538407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.012 [2024-11-26 20:07:59.777868] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:47.012 [2024-11-26 20:07:59.777957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595877 ] 00:05:47.012 [2024-11-26 20:07:59.848880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.012 [2024-11-26 20:07:59.889221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.012 [2024-11-26 20:07:59.889311] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
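The 'socket in use' message above is the failure exit_on_failed_rpc_init is looking for: both instances default their RPC listener to /var/tmp/spdk.sock, so the second target (core mask 0x2) cannot bind and exits non-zero, which the NOT wrapper then asserts. Two targets can coexist if each is given its own socket via -r, roughly like this (socket names are arbitrary examples):

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

    # drive each instance by pointing the RPC client at its socket
    ./scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    ./scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version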
00:05:47.012 [2024-11-26 20:07:59.889324] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:47.012 [2024-11-26 20:07:59.889332] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1595871 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1595871 ']' 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1595871 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.012 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1595871 00:05:47.269 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.269 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.269 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1595871' 00:05:47.269 killing process with pid 1595871 00:05:47.269 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1595871 00:05:47.269 20:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1595871 00:05:47.528 00:05:47.528 real 0m0.881s 00:05:47.528 user 0m0.908s 00:05:47.528 sys 0m0.389s 00:05:47.528 20:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.528 20:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.528 ************************************ 00:05:47.528 END TEST exit_on_failed_rpc_init 00:05:47.528 ************************************ 00:05:47.528 20:08:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:47.528 00:05:47.528 real 0m13.067s 00:05:47.528 user 0m12.254s 00:05:47.528 sys 0m1.647s 00:05:47.528 20:08:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.528 20:08:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.528 ************************************ 00:05:47.528 END TEST skip_rpc 00:05:47.528 ************************************ 00:05:47.528 20:08:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:47.528 20:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.528 20:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.528 20:08:00 
-- common/autotest_common.sh@10 -- # set +x 00:05:47.528 ************************************ 00:05:47.528 START TEST rpc_client 00:05:47.528 ************************************ 00:05:47.528 20:08:00 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:47.789 * Looking for test storage... 00:05:47.789 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.789 20:08:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.789 --rc genhtml_branch_coverage=1 00:05:47.789 --rc genhtml_function_coverage=1 00:05:47.789 --rc genhtml_legend=1 00:05:47.789 --rc geninfo_all_blocks=1 00:05:47.789 --rc geninfo_unexecuted_blocks=1 00:05:47.789 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.789 ' 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.789 --rc genhtml_branch_coverage=1 00:05:47.789 --rc genhtml_function_coverage=1 00:05:47.789 --rc genhtml_legend=1 00:05:47.789 --rc geninfo_all_blocks=1 00:05:47.789 --rc geninfo_unexecuted_blocks=1 00:05:47.789 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.789 ' 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.789 --rc genhtml_branch_coverage=1 00:05:47.789 --rc genhtml_function_coverage=1 00:05:47.789 --rc genhtml_legend=1 00:05:47.789 --rc geninfo_all_blocks=1 00:05:47.789 --rc geninfo_unexecuted_blocks=1 00:05:47.789 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.789 ' 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.789 --rc genhtml_branch_coverage=1 00:05:47.789 --rc genhtml_function_coverage=1 00:05:47.789 --rc genhtml_legend=1 00:05:47.789 --rc geninfo_all_blocks=1 00:05:47.789 --rc geninfo_unexecuted_blocks=1 00:05:47.789 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.789 ' 00:05:47.789 20:08:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:47.789 OK 00:05:47.789 20:08:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:47.789 00:05:47.789 real 0m0.218s 00:05:47.789 user 0m0.106s 00:05:47.789 sys 0m0.131s 00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
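The OK above is printed by the compiled rpc_client_test binary that rpc_client.sh invokes. A rough shell-level way to poke the same JSON-RPC interface by hand, assuming a target is listening on the default socket, is simply to issue a couple of read-only RPCs against it:

    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods | head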
00:05:47.789 20:08:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:47.789 ************************************ 00:05:47.789 END TEST rpc_client 00:05:47.789 ************************************ 00:05:47.789 20:08:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.789 20:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.789 20:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.789 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.789 ************************************ 00:05:47.789 START TEST json_config 00:05:47.789 ************************************ 00:05:47.789 20:08:00 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:48.048 20:08:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.048 20:08:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.048 20:08:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.048 20:08:00 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.048 20:08:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.048 20:08:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.048 20:08:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.048 20:08:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.048 20:08:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.048 20:08:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.048 20:08:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.048 20:08:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.048 20:08:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.048 20:08:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.048 20:08:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.048 20:08:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:48.048 20:08:00 json_config -- scripts/common.sh@345 -- # : 1 00:05:48.048 20:08:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.048 20:08:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.048 20:08:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:48.048 20:08:00 json_config -- scripts/common.sh@353 -- # local d=1 00:05:48.048 20:08:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.049 20:08:00 json_config -- scripts/common.sh@355 -- # echo 1 00:05:48.049 20:08:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.049 20:08:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:48.049 20:08:00 json_config -- scripts/common.sh@353 -- # local d=2 00:05:48.049 20:08:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.049 20:08:00 json_config -- scripts/common.sh@355 -- # echo 2 00:05:48.049 20:08:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.049 20:08:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.049 20:08:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.049 20:08:00 json_config -- scripts/common.sh@368 -- # return 0 00:05:48.049 20:08:00 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.049 20:08:00 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.049 --rc genhtml_branch_coverage=1 00:05:48.049 --rc genhtml_function_coverage=1 00:05:48.049 --rc genhtml_legend=1 00:05:48.049 --rc geninfo_all_blocks=1 00:05:48.049 --rc geninfo_unexecuted_blocks=1 00:05:48.049 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.049 ' 00:05:48.049 20:08:00 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.049 --rc genhtml_branch_coverage=1 00:05:48.049 --rc genhtml_function_coverage=1 00:05:48.049 --rc genhtml_legend=1 00:05:48.049 --rc geninfo_all_blocks=1 00:05:48.049 --rc geninfo_unexecuted_blocks=1 00:05:48.049 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.049 ' 00:05:48.049 20:08:00 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.049 --rc genhtml_branch_coverage=1 00:05:48.049 --rc genhtml_function_coverage=1 00:05:48.049 --rc genhtml_legend=1 00:05:48.049 --rc geninfo_all_blocks=1 00:05:48.049 --rc geninfo_unexecuted_blocks=1 00:05:48.049 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.049 ' 00:05:48.049 20:08:00 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.049 --rc genhtml_branch_coverage=1 00:05:48.049 --rc genhtml_function_coverage=1 00:05:48.049 --rc genhtml_legend=1 00:05:48.049 --rc geninfo_all_blocks=1 00:05:48.049 --rc geninfo_unexecuted_blocks=1 00:05:48.049 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.049 ' 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:48.049 20:08:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.049 20:08:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.049 20:08:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.049 20:08:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.049 20:08:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.049 20:08:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.049 20:08:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.049 20:08:00 json_config -- paths/export.sh@5 -- # export PATH 00:05:48.049 20:08:00 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@51 -- # : 0 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:48.049 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:48.049 20:08:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:48.049 WARNING: No tests are enabled so not running JSON configuration tests 00:05:48.049 20:08:00 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:48.049 00:05:48.049 real 0m0.190s 00:05:48.049 user 0m0.110s 00:05:48.049 sys 0m0.088s 00:05:48.049 20:08:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.049 20:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.049 ************************************ 00:05:48.049 END TEST json_config 00:05:48.049 ************************************ 00:05:48.049 20:08:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.049 20:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.049 20:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.049 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:48.049 ************************************ 00:05:48.049 START TEST json_config_extra_key 00:05:48.049 ************************************ 00:05:48.309 20:08:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov 
--version 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.309 --rc genhtml_branch_coverage=1 00:05:48.309 --rc genhtml_function_coverage=1 00:05:48.309 --rc genhtml_legend=1 00:05:48.309 --rc geninfo_all_blocks=1 00:05:48.309 --rc geninfo_unexecuted_blocks=1 00:05:48.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.309 ' 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.309 --rc genhtml_branch_coverage=1 
00:05:48.309 --rc genhtml_function_coverage=1 00:05:48.309 --rc genhtml_legend=1 00:05:48.309 --rc geninfo_all_blocks=1 00:05:48.309 --rc geninfo_unexecuted_blocks=1 00:05:48.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.309 ' 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.309 --rc genhtml_branch_coverage=1 00:05:48.309 --rc genhtml_function_coverage=1 00:05:48.309 --rc genhtml_legend=1 00:05:48.309 --rc geninfo_all_blocks=1 00:05:48.309 --rc geninfo_unexecuted_blocks=1 00:05:48.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.309 ' 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.309 --rc genhtml_branch_coverage=1 00:05:48.309 --rc genhtml_function_coverage=1 00:05:48.309 --rc genhtml_legend=1 00:05:48.309 --rc geninfo_all_blocks=1 00:05:48.309 --rc geninfo_unexecuted_blocks=1 00:05:48.309 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:48.309 ' 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.309 20:08:01 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.309 20:08:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.309 20:08:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.309 20:08:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.309 20:08:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.309 20:08:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:48.309 20:08:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:48.309 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:48.309 20:08:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:48.309 INFO: launching applications... 00:05:48.309 20:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1596307 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.309 Waiting for target to run... 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1596307 /var/tmp/spdk_tgt.sock 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1596307 ']' 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.309 20:08:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
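Here the extra_key target is launched with its own RPC socket (-r /var/tmp/spdk_tgt.sock), a 1024 MB memory size (-s 1024) and the extra_key.json config, so it does not collide with anything on the default /var/tmp/spdk.sock. One way to wait until such a socket starts answering RPCs, as a sketch only (not necessarily how waitforlisten is implemented internally):

    sock=/var/tmp/spdk_tgt.sock
    for i in $(seq 1 100); do
        # succeed as soon as the target's RPC server responds on the socket
        ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done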
00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.309 20:08:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.309 [2024-11-26 20:08:01.202782] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:48.310 [2024-11-26 20:08:01.202851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596307 ] 00:05:48.873 [2024-11-26 20:08:01.644293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.873 [2024-11-26 20:08:01.699951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.131 20:08:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.131 20:08:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:49.131 20:08:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:49.131 00:05:49.131 20:08:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:49.131 INFO: shutting down applications... 00:05:49.131 20:08:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:49.131 20:08:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:49.131 20:08:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:49.131 20:08:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1596307 ]] 00:05:49.131 20:08:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1596307 00:05:49.131 20:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:49.131 20:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.132 20:08:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1596307 00:05:49.132 20:08:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.696 20:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.696 20:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.696 20:08:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1596307 00:05:49.696 20:08:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:49.696 20:08:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:49.696 20:08:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:49.696 20:08:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:49.696 SPDK target shutdown done 00:05:49.696 20:08:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:49.696 Success 00:05:49.696 00:05:49.696 real 0m1.574s 00:05:49.696 user 0m1.148s 00:05:49.696 sys 0m0.581s 00:05:49.696 20:08:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.696 20:08:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.696 ************************************ 00:05:49.696 END TEST json_config_extra_key 00:05:49.696 ************************************ 00:05:49.696 20:08:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
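The shutdown just logged follows the usual pattern in these tests: send SIGINT to the target, then poll with kill -0 (up to 30 half-second tries here) until the process is gone before printing 'SPDK target shutdown done'. Condensed into a sketch, with the pid variable as a stand-in for whatever the harness recorded:

    kill -SIGINT "$spdk_pid"
    for i in $(seq 1 30); do
        kill -0 "$spdk_pid" 2>/dev/null || break   # stop polling once it has exited
        sleep 0.5
    done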
00:05:49.696 20:08:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.696 20:08:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.696 20:08:02 -- common/autotest_common.sh@10 -- # set +x 00:05:49.954 ************************************ 00:05:49.954 START TEST alias_rpc 00:05:49.954 ************************************ 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:49.954 * Looking for test storage... 00:05:49.954 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.954 20:08:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.954 --rc genhtml_branch_coverage=1 00:05:49.954 --rc genhtml_function_coverage=1 00:05:49.954 --rc genhtml_legend=1 00:05:49.954 --rc geninfo_all_blocks=1 00:05:49.954 --rc geninfo_unexecuted_blocks=1 00:05:49.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.954 ' 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.954 --rc genhtml_branch_coverage=1 00:05:49.954 --rc genhtml_function_coverage=1 00:05:49.954 --rc genhtml_legend=1 00:05:49.954 --rc geninfo_all_blocks=1 00:05:49.954 --rc geninfo_unexecuted_blocks=1 00:05:49.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.954 ' 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.954 --rc genhtml_branch_coverage=1 00:05:49.954 --rc genhtml_function_coverage=1 00:05:49.954 --rc genhtml_legend=1 00:05:49.954 --rc geninfo_all_blocks=1 00:05:49.954 --rc geninfo_unexecuted_blocks=1 00:05:49.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.954 ' 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.954 --rc genhtml_branch_coverage=1 00:05:49.954 --rc genhtml_function_coverage=1 00:05:49.954 --rc genhtml_legend=1 00:05:49.954 --rc geninfo_all_blocks=1 00:05:49.954 --rc geninfo_unexecuted_blocks=1 00:05:49.954 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.954 ' 00:05:49.954 20:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.954 20:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1596639 00:05:49.954 20:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.954 20:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1596639 00:05:49.954 20:08:02 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 1596639 ']' 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.954 20:08:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.954 [2024-11-26 20:08:02.858142] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:49.954 [2024-11-26 20:08:02.858227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596639 ] 00:05:50.212 [2024-11-26 20:08:02.928724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.212 [2024-11-26 20:08:02.968500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.467 20:08:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.467 20:08:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.467 20:08:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:50.467 20:08:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1596639 00:05:50.467 20:08:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1596639 ']' 00:05:50.467 20:08:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1596639 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596639 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596639' 00:05:50.724 killing process with pid 1596639 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 1596639 00:05:50.724 20:08:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 1596639 00:05:50.982 00:05:50.982 real 0m1.118s 00:05:50.982 user 0m1.108s 00:05:50.982 sys 0m0.452s 00:05:50.982 20:08:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.982 20:08:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.982 ************************************ 00:05:50.982 END TEST alias_rpc 00:05:50.982 ************************************ 00:05:50.982 20:08:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:50.982 20:08:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.982 20:08:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.982 20:08:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.982 20:08:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.982 ************************************ 00:05:50.982 START TEST 
spdkcli_tcp 00:05:50.982 ************************************ 00:05:50.982 20:08:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.238 * Looking for test storage... 00:05:51.239 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:51.239 20:08:03 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.239 20:08:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.239 20:08:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.239 20:08:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.239 --rc genhtml_branch_coverage=1 00:05:51.239 --rc genhtml_function_coverage=1 00:05:51.239 --rc genhtml_legend=1 00:05:51.239 --rc geninfo_all_blocks=1 00:05:51.239 --rc geninfo_unexecuted_blocks=1 00:05:51.239 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.239 ' 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.239 --rc genhtml_branch_coverage=1 00:05:51.239 --rc genhtml_function_coverage=1 00:05:51.239 --rc genhtml_legend=1 00:05:51.239 --rc geninfo_all_blocks=1 00:05:51.239 --rc geninfo_unexecuted_blocks=1 00:05:51.239 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.239 ' 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.239 --rc genhtml_branch_coverage=1 00:05:51.239 --rc genhtml_function_coverage=1 00:05:51.239 --rc genhtml_legend=1 00:05:51.239 --rc geninfo_all_blocks=1 00:05:51.239 --rc geninfo_unexecuted_blocks=1 00:05:51.239 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.239 ' 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.239 --rc genhtml_branch_coverage=1 00:05:51.239 --rc genhtml_function_coverage=1 00:05:51.239 --rc genhtml_legend=1 00:05:51.239 --rc geninfo_all_blocks=1 00:05:51.239 --rc geninfo_unexecuted_blocks=1 00:05:51.239 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.239 ' 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1596960 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1596960 00:05:51.239 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1596960 ']' 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.239 20:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.239 [2024-11-26 20:08:04.069293] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:51.239 [2024-11-26 20:08:04.069355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596960 ] 00:05:51.239 [2024-11-26 20:08:04.140165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.530 [2024-11-26 20:08:04.181932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.530 [2024-11-26 20:08:04.181934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.530 20:08:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.530 20:08:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:51.530 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1596967 00:05:51.530 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.530 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.788 [ 00:05:51.788 "spdk_get_version", 00:05:51.788 "rpc_get_methods", 00:05:51.788 "notify_get_notifications", 00:05:51.788 "notify_get_types", 00:05:51.788 "trace_get_info", 00:05:51.788 "trace_get_tpoint_group_mask", 00:05:51.788 "trace_disable_tpoint_group", 00:05:51.788 "trace_enable_tpoint_group", 00:05:51.788 "trace_clear_tpoint_mask", 00:05:51.788 "trace_set_tpoint_mask", 00:05:51.788 "fsdev_set_opts", 00:05:51.788 "fsdev_get_opts", 00:05:51.788 "framework_get_pci_devices", 00:05:51.788 "framework_get_config", 00:05:51.788 "framework_get_subsystems", 00:05:51.788 "vfu_tgt_set_base_path", 00:05:51.788 
"keyring_get_keys", 00:05:51.788 "iobuf_get_stats", 00:05:51.788 "iobuf_set_options", 00:05:51.788 "sock_get_default_impl", 00:05:51.788 "sock_set_default_impl", 00:05:51.788 "sock_impl_set_options", 00:05:51.788 "sock_impl_get_options", 00:05:51.788 "vmd_rescan", 00:05:51.788 "vmd_remove_device", 00:05:51.788 "vmd_enable", 00:05:51.788 "accel_get_stats", 00:05:51.788 "accel_set_options", 00:05:51.788 "accel_set_driver", 00:05:51.788 "accel_crypto_key_destroy", 00:05:51.788 "accel_crypto_keys_get", 00:05:51.788 "accel_crypto_key_create", 00:05:51.788 "accel_assign_opc", 00:05:51.788 "accel_get_module_info", 00:05:51.788 "accel_get_opc_assignments", 00:05:51.788 "bdev_get_histogram", 00:05:51.788 "bdev_enable_histogram", 00:05:51.788 "bdev_set_qos_limit", 00:05:51.788 "bdev_set_qd_sampling_period", 00:05:51.788 "bdev_get_bdevs", 00:05:51.788 "bdev_reset_iostat", 00:05:51.788 "bdev_get_iostat", 00:05:51.788 "bdev_examine", 00:05:51.788 "bdev_wait_for_examine", 00:05:51.788 "bdev_set_options", 00:05:51.788 "scsi_get_devices", 00:05:51.788 "thread_set_cpumask", 00:05:51.788 "scheduler_set_options", 00:05:51.788 "framework_get_governor", 00:05:51.788 "framework_get_scheduler", 00:05:51.788 "framework_set_scheduler", 00:05:51.788 "framework_get_reactors", 00:05:51.788 "thread_get_io_channels", 00:05:51.788 "thread_get_pollers", 00:05:51.788 "thread_get_stats", 00:05:51.788 "framework_monitor_context_switch", 00:05:51.788 "spdk_kill_instance", 00:05:51.788 "log_enable_timestamps", 00:05:51.788 "log_get_flags", 00:05:51.788 "log_clear_flag", 00:05:51.788 "log_set_flag", 00:05:51.788 "log_get_level", 00:05:51.788 "log_set_level", 00:05:51.788 "log_get_print_level", 00:05:51.788 "log_set_print_level", 00:05:51.788 "framework_enable_cpumask_locks", 00:05:51.788 "framework_disable_cpumask_locks", 00:05:51.788 "framework_wait_init", 00:05:51.788 "framework_start_init", 00:05:51.788 "virtio_blk_create_transport", 00:05:51.788 "virtio_blk_get_transports", 00:05:51.788 "vhost_controller_set_coalescing", 00:05:51.788 "vhost_get_controllers", 00:05:51.788 "vhost_delete_controller", 00:05:51.788 "vhost_create_blk_controller", 00:05:51.788 "vhost_scsi_controller_remove_target", 00:05:51.788 "vhost_scsi_controller_add_target", 00:05:51.788 "vhost_start_scsi_controller", 00:05:51.788 "vhost_create_scsi_controller", 00:05:51.788 "ublk_recover_disk", 00:05:51.788 "ublk_get_disks", 00:05:51.788 "ublk_stop_disk", 00:05:51.788 "ublk_start_disk", 00:05:51.788 "ublk_destroy_target", 00:05:51.788 "ublk_create_target", 00:05:51.788 "nbd_get_disks", 00:05:51.788 "nbd_stop_disk", 00:05:51.788 "nbd_start_disk", 00:05:51.788 "env_dpdk_get_mem_stats", 00:05:51.788 "nvmf_stop_mdns_prr", 00:05:51.788 "nvmf_publish_mdns_prr", 00:05:51.788 "nvmf_subsystem_get_listeners", 00:05:51.788 "nvmf_subsystem_get_qpairs", 00:05:51.788 "nvmf_subsystem_get_controllers", 00:05:51.788 "nvmf_get_stats", 00:05:51.788 "nvmf_get_transports", 00:05:51.788 "nvmf_create_transport", 00:05:51.788 "nvmf_get_targets", 00:05:51.788 "nvmf_delete_target", 00:05:51.788 "nvmf_create_target", 00:05:51.788 "nvmf_subsystem_allow_any_host", 00:05:51.788 "nvmf_subsystem_set_keys", 00:05:51.788 "nvmf_subsystem_remove_host", 00:05:51.788 "nvmf_subsystem_add_host", 00:05:51.788 "nvmf_ns_remove_host", 00:05:51.788 "nvmf_ns_add_host", 00:05:51.788 "nvmf_subsystem_remove_ns", 00:05:51.788 "nvmf_subsystem_set_ns_ana_group", 00:05:51.788 "nvmf_subsystem_add_ns", 00:05:51.788 "nvmf_subsystem_listener_set_ana_state", 00:05:51.788 "nvmf_discovery_get_referrals", 
00:05:51.788 "nvmf_discovery_remove_referral", 00:05:51.788 "nvmf_discovery_add_referral", 00:05:51.788 "nvmf_subsystem_remove_listener", 00:05:51.788 "nvmf_subsystem_add_listener", 00:05:51.788 "nvmf_delete_subsystem", 00:05:51.788 "nvmf_create_subsystem", 00:05:51.788 "nvmf_get_subsystems", 00:05:51.788 "nvmf_set_crdt", 00:05:51.788 "nvmf_set_config", 00:05:51.788 "nvmf_set_max_subsystems", 00:05:51.788 "iscsi_get_histogram", 00:05:51.788 "iscsi_enable_histogram", 00:05:51.788 "iscsi_set_options", 00:05:51.788 "iscsi_get_auth_groups", 00:05:51.788 "iscsi_auth_group_remove_secret", 00:05:51.788 "iscsi_auth_group_add_secret", 00:05:51.788 "iscsi_delete_auth_group", 00:05:51.788 "iscsi_create_auth_group", 00:05:51.788 "iscsi_set_discovery_auth", 00:05:51.788 "iscsi_get_options", 00:05:51.788 "iscsi_target_node_request_logout", 00:05:51.788 "iscsi_target_node_set_redirect", 00:05:51.788 "iscsi_target_node_set_auth", 00:05:51.788 "iscsi_target_node_add_lun", 00:05:51.788 "iscsi_get_stats", 00:05:51.788 "iscsi_get_connections", 00:05:51.788 "iscsi_portal_group_set_auth", 00:05:51.788 "iscsi_start_portal_group", 00:05:51.788 "iscsi_delete_portal_group", 00:05:51.788 "iscsi_create_portal_group", 00:05:51.788 "iscsi_get_portal_groups", 00:05:51.788 "iscsi_delete_target_node", 00:05:51.788 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.788 "iscsi_target_node_add_pg_ig_maps", 00:05:51.788 "iscsi_create_target_node", 00:05:51.788 "iscsi_get_target_nodes", 00:05:51.788 "iscsi_delete_initiator_group", 00:05:51.788 "iscsi_initiator_group_remove_initiators", 00:05:51.788 "iscsi_initiator_group_add_initiators", 00:05:51.788 "iscsi_create_initiator_group", 00:05:51.788 "iscsi_get_initiator_groups", 00:05:51.788 "fsdev_aio_delete", 00:05:51.788 "fsdev_aio_create", 00:05:51.788 "keyring_linux_set_options", 00:05:51.788 "keyring_file_remove_key", 00:05:51.788 "keyring_file_add_key", 00:05:51.788 "vfu_virtio_create_fs_endpoint", 00:05:51.788 "vfu_virtio_create_scsi_endpoint", 00:05:51.788 "vfu_virtio_scsi_remove_target", 00:05:51.788 "vfu_virtio_scsi_add_target", 00:05:51.788 "vfu_virtio_create_blk_endpoint", 00:05:51.788 "vfu_virtio_delete_endpoint", 00:05:51.788 "iaa_scan_accel_module", 00:05:51.788 "dsa_scan_accel_module", 00:05:51.788 "ioat_scan_accel_module", 00:05:51.788 "accel_error_inject_error", 00:05:51.788 "bdev_iscsi_delete", 00:05:51.788 "bdev_iscsi_create", 00:05:51.788 "bdev_iscsi_set_options", 00:05:51.788 "bdev_virtio_attach_controller", 00:05:51.788 "bdev_virtio_scsi_get_devices", 00:05:51.788 "bdev_virtio_detach_controller", 00:05:51.788 "bdev_virtio_blk_set_hotplug", 00:05:51.788 "bdev_ftl_set_property", 00:05:51.788 "bdev_ftl_get_properties", 00:05:51.788 "bdev_ftl_get_stats", 00:05:51.788 "bdev_ftl_unmap", 00:05:51.788 "bdev_ftl_unload", 00:05:51.788 "bdev_ftl_delete", 00:05:51.788 "bdev_ftl_load", 00:05:51.788 "bdev_ftl_create", 00:05:51.788 "bdev_aio_delete", 00:05:51.788 "bdev_aio_rescan", 00:05:51.788 "bdev_aio_create", 00:05:51.788 "blobfs_create", 00:05:51.788 "blobfs_detect", 00:05:51.788 "blobfs_set_cache_size", 00:05:51.788 "bdev_zone_block_delete", 00:05:51.788 "bdev_zone_block_create", 00:05:51.788 "bdev_delay_delete", 00:05:51.788 "bdev_delay_create", 00:05:51.788 "bdev_delay_update_latency", 00:05:51.788 "bdev_split_delete", 00:05:51.788 "bdev_split_create", 00:05:51.788 "bdev_error_inject_error", 00:05:51.788 "bdev_error_delete", 00:05:51.788 "bdev_error_create", 00:05:51.788 "bdev_raid_set_options", 00:05:51.788 "bdev_raid_remove_base_bdev", 00:05:51.788 
"bdev_raid_add_base_bdev", 00:05:51.788 "bdev_raid_delete", 00:05:51.788 "bdev_raid_create", 00:05:51.788 "bdev_raid_get_bdevs", 00:05:51.788 "bdev_lvol_set_parent_bdev", 00:05:51.788 "bdev_lvol_set_parent", 00:05:51.788 "bdev_lvol_check_shallow_copy", 00:05:51.788 "bdev_lvol_start_shallow_copy", 00:05:51.788 "bdev_lvol_grow_lvstore", 00:05:51.788 "bdev_lvol_get_lvols", 00:05:51.789 "bdev_lvol_get_lvstores", 00:05:51.789 "bdev_lvol_delete", 00:05:51.789 "bdev_lvol_set_read_only", 00:05:51.789 "bdev_lvol_resize", 00:05:51.789 "bdev_lvol_decouple_parent", 00:05:51.789 "bdev_lvol_inflate", 00:05:51.789 "bdev_lvol_rename", 00:05:51.789 "bdev_lvol_clone_bdev", 00:05:51.789 "bdev_lvol_clone", 00:05:51.789 "bdev_lvol_snapshot", 00:05:51.789 "bdev_lvol_create", 00:05:51.789 "bdev_lvol_delete_lvstore", 00:05:51.789 "bdev_lvol_rename_lvstore", 00:05:51.789 "bdev_lvol_create_lvstore", 00:05:51.789 "bdev_passthru_delete", 00:05:51.789 "bdev_passthru_create", 00:05:51.789 "bdev_nvme_cuse_unregister", 00:05:51.789 "bdev_nvme_cuse_register", 00:05:51.789 "bdev_opal_new_user", 00:05:51.789 "bdev_opal_set_lock_state", 00:05:51.789 "bdev_opal_delete", 00:05:51.789 "bdev_opal_get_info", 00:05:51.789 "bdev_opal_create", 00:05:51.789 "bdev_nvme_opal_revert", 00:05:51.789 "bdev_nvme_opal_init", 00:05:51.789 "bdev_nvme_send_cmd", 00:05:51.789 "bdev_nvme_set_keys", 00:05:51.789 "bdev_nvme_get_path_iostat", 00:05:51.789 "bdev_nvme_get_mdns_discovery_info", 00:05:51.789 "bdev_nvme_stop_mdns_discovery", 00:05:51.789 "bdev_nvme_start_mdns_discovery", 00:05:51.789 "bdev_nvme_set_multipath_policy", 00:05:51.789 "bdev_nvme_set_preferred_path", 00:05:51.789 "bdev_nvme_get_io_paths", 00:05:51.789 "bdev_nvme_remove_error_injection", 00:05:51.789 "bdev_nvme_add_error_injection", 00:05:51.789 "bdev_nvme_get_discovery_info", 00:05:51.789 "bdev_nvme_stop_discovery", 00:05:51.789 "bdev_nvme_start_discovery", 00:05:51.789 "bdev_nvme_get_controller_health_info", 00:05:51.789 "bdev_nvme_disable_controller", 00:05:51.789 "bdev_nvme_enable_controller", 00:05:51.789 "bdev_nvme_reset_controller", 00:05:51.789 "bdev_nvme_get_transport_statistics", 00:05:51.789 "bdev_nvme_apply_firmware", 00:05:51.789 "bdev_nvme_detach_controller", 00:05:51.789 "bdev_nvme_get_controllers", 00:05:51.789 "bdev_nvme_attach_controller", 00:05:51.789 "bdev_nvme_set_hotplug", 00:05:51.789 "bdev_nvme_set_options", 00:05:51.789 "bdev_null_resize", 00:05:51.789 "bdev_null_delete", 00:05:51.789 "bdev_null_create", 00:05:51.789 "bdev_malloc_delete", 00:05:51.789 "bdev_malloc_create" 00:05:51.789 ] 00:05:51.789 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.789 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.789 20:08:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1596960 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1596960 ']' 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1596960 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596960 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.789 
20:08:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596960' 00:05:51.789 killing process with pid 1596960 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1596960 00:05:51.789 20:08:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1596960 00:05:52.045 00:05:52.045 real 0m1.131s 00:05:52.045 user 0m1.861s 00:05:52.045 sys 0m0.490s 00:05:52.045 20:08:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.045 20:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.045 ************************************ 00:05:52.045 END TEST spdkcli_tcp 00:05:52.045 ************************************ 00:05:52.301 20:08:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:52.301 20:08:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.301 20:08:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.301 20:08:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.301 ************************************ 00:05:52.301 START TEST dpdk_mem_utility 00:05:52.301 ************************************ 00:05:52.301 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:52.301 * Looking for test storage... 00:05:52.301 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:52.301 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.301 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.301 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.558 20:08:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.558 --rc genhtml_branch_coverage=1 00:05:52.558 --rc genhtml_function_coverage=1 00:05:52.558 --rc genhtml_legend=1 00:05:52.558 --rc geninfo_all_blocks=1 00:05:52.558 --rc geninfo_unexecuted_blocks=1 00:05:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.558 ' 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.558 --rc genhtml_branch_coverage=1 00:05:52.558 --rc genhtml_function_coverage=1 00:05:52.558 --rc genhtml_legend=1 00:05:52.558 --rc geninfo_all_blocks=1 00:05:52.558 --rc geninfo_unexecuted_blocks=1 00:05:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.558 ' 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.558 --rc genhtml_branch_coverage=1 00:05:52.558 --rc genhtml_function_coverage=1 00:05:52.558 --rc genhtml_legend=1 00:05:52.558 --rc geninfo_all_blocks=1 00:05:52.558 --rc geninfo_unexecuted_blocks=1 00:05:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.558 ' 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.558 --rc genhtml_branch_coverage=1 00:05:52.558 --rc genhtml_function_coverage=1 00:05:52.558 --rc genhtml_legend=1 00:05:52.558 --rc geninfo_all_blocks=1 00:05:52.558 --rc geninfo_unexecuted_blocks=1 00:05:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.558 ' 00:05:52.558 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.558 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1597298 00:05:52.558 20:08:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1597298 00:05:52.558 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1597298 ']' 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.558 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.558 [2024-11-26 20:08:05.279228] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:52.558 [2024-11-26 20:08:05.279306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597298 ] 00:05:52.558 [2024-11-26 20:08:05.353166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.558 [2024-11-26 20:08:05.395783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.816 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.816 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:52.816 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.816 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.816 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.816 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.816 { 00:05:52.816 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.816 } 00:05:52.816 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.816 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.816 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:52.816 1 heaps totaling size 818.000000 MiB 00:05:52.816 size: 818.000000 MiB heap id: 0 00:05:52.816 end heaps---------- 00:05:52.816 9 mempools totaling size 603.782043 MiB 00:05:52.816 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.816 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.816 size: 100.555481 MiB name: bdev_io_1597298 00:05:52.816 size: 50.003479 MiB name: msgpool_1597298 00:05:52.816 size: 36.509338 MiB name: fsdev_io_1597298 00:05:52.816 size: 21.763794 MiB name: PDU_Pool 00:05:52.816 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.816 size: 4.133484 MiB name: evtpool_1597298 00:05:52.816 size: 0.026123 MiB name: Session_Pool 00:05:52.816 end mempools------- 00:05:52.816 6 memzones totaling size 4.142822 MiB 00:05:52.816 size: 1.000366 MiB name: RG_ring_0_1597298 00:05:52.816 size: 1.000366 MiB name: RG_ring_1_1597298 00:05:52.816 size: 1.000366 MiB name: RG_ring_4_1597298 
00:05:52.816 size: 1.000366 MiB name: RG_ring_5_1597298 00:05:52.816 size: 0.125366 MiB name: RG_ring_2_1597298 00:05:52.816 size: 0.015991 MiB name: RG_ring_3_1597298 00:05:52.816 end memzones------- 00:05:52.816 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.816 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:52.816 list of free elements. size: 10.852478 MiB 00:05:52.816 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:52.816 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:52.816 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:52.816 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:52.816 element at address: 0x200008000000 with size: 0.959839 MiB 00:05:52.816 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:52.816 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:52.816 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:52.816 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:52.816 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:52.816 element at address: 0x200003e00000 with size: 0.490723 MiB 00:05:52.816 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:52.816 element at address: 0x200010600000 with size: 0.481934 MiB 00:05:52.816 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:52.816 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:52.816 list of standard malloc elements. size: 199.218628 MiB 00:05:52.816 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:05:52.816 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:05:52.816 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:52.816 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:52.816 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:52.816 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:52.816 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:52.816 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:52.816 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:52.816 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20000085b100 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000008df880 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:05:52.816 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20001067b600 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:52.816 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:52.816 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:52.816 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:52.816 list of memzone associated elements. size: 607.928894 MiB 00:05:52.816 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:52.816 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:52.816 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:52.816 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:52.816 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:52.816 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1597298_0 00:05:52.816 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:52.816 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1597298_0 00:05:52.816 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:05:52.816 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1597298_0 00:05:52.816 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:52.816 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:52.816 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:52.816 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:52.816 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:52.816 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1597298_0 00:05:52.816 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:52.816 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1597298 00:05:52.816 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:52.816 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1597298 00:05:52.816 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:05:52.816 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:52.816 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:52.816 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:52.816 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:05:52.816 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:52.816 element at address: 0x200003efde40 with size: 1.008118 MiB 00:05:52.816 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:05:52.816 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:52.816 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1597298 00:05:52.816 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:52.816 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1597298 00:05:52.816 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:52.816 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1597298 00:05:52.816 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:52.816 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1597298 00:05:52.816 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:05:52.816 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1597298 00:05:52.816 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:52.816 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1597298 00:05:52.816 element at address: 0x20001067b780 with size: 0.500488 MiB 00:05:52.816 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:52.816 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:05:52.816 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:52.816 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:52.816 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:52.816 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:52.816 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1597298 00:05:52.816 element at address: 0x2000008df940 with size: 0.125488 MiB 00:05:52.816 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1597298 00:05:52.816 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:05:52.816 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:52.816 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:52.816 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:52.816 element at address: 0x2000008db680 with size: 0.016113 MiB 00:05:52.816 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1597298 00:05:52.816 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:52.816 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:52.816 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:52.816 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1597298 00:05:52.816 element at address: 0x2000008db480 with size: 0.000305 MiB 00:05:52.816 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1597298 00:05:52.816 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:52.816 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1597298 00:05:52.816 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:52.816 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:52.816 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:52.816 20:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1597298 00:05:52.816 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1597298 ']' 00:05:52.816 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1597298 00:05:52.817 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:52.817 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:52.817 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1597298 00:05:53.125 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.125 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.125 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1597298' 00:05:53.125 killing process with pid 1597298 00:05:53.125 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1597298 00:05:53.125 20:08:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1597298 00:05:53.411 00:05:53.411 real 0m1.003s 00:05:53.411 user 0m0.914s 00:05:53.411 sys 0m0.440s 00:05:53.411 20:08:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.411 20:08:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.411 ************************************ 00:05:53.411 END TEST dpdk_mem_utility 00:05:53.411 ************************************ 00:05:53.411 20:08:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:53.411 20:08:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.411 20:08:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.411 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:53.411 ************************************ 00:05:53.411 START TEST event 00:05:53.411 ************************************ 00:05:53.411 20:08:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:53.411 * Looking for test storage... 00:05:53.411 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:53.411 20:08:06 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.411 20:08:06 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.411 20:08:06 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.411 20:08:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.411 20:08:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.411 20:08:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.411 20:08:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.411 20:08:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.411 20:08:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.411 20:08:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.411 20:08:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.411 20:08:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.411 20:08:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.411 20:08:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.411 20:08:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.411 20:08:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:53.411 20:08:06 event -- scripts/common.sh@345 -- # : 1 00:05:53.411 20:08:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.411 20:08:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.411 20:08:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:53.411 20:08:06 event -- scripts/common.sh@353 -- # local d=1 00:05:53.411 20:08:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.411 20:08:06 event -- scripts/common.sh@355 -- # echo 1 00:05:53.411 20:08:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.411 20:08:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:53.680 20:08:06 event -- scripts/common.sh@353 -- # local d=2 00:05:53.680 20:08:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.680 20:08:06 event -- scripts/common.sh@355 -- # echo 2 00:05:53.680 20:08:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.680 20:08:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.680 20:08:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.680 20:08:06 event -- scripts/common.sh@368 -- # return 0 00:05:53.680 20:08:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.680 20:08:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.680 --rc genhtml_branch_coverage=1 00:05:53.680 --rc genhtml_function_coverage=1 00:05:53.680 --rc genhtml_legend=1 00:05:53.680 --rc geninfo_all_blocks=1 00:05:53.680 --rc geninfo_unexecuted_blocks=1 00:05:53.680 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.680 ' 00:05:53.680 20:08:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.680 --rc genhtml_branch_coverage=1 00:05:53.680 --rc genhtml_function_coverage=1 00:05:53.680 --rc genhtml_legend=1 00:05:53.680 --rc geninfo_all_blocks=1 00:05:53.680 --rc geninfo_unexecuted_blocks=1 00:05:53.680 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.680 ' 00:05:53.680 20:08:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.680 --rc genhtml_branch_coverage=1 00:05:53.680 --rc genhtml_function_coverage=1 00:05:53.680 --rc genhtml_legend=1 00:05:53.680 --rc geninfo_all_blocks=1 00:05:53.680 --rc geninfo_unexecuted_blocks=1 00:05:53.680 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.680 ' 00:05:53.680 20:08:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.680 --rc genhtml_branch_coverage=1 00:05:53.680 --rc genhtml_function_coverage=1 00:05:53.680 --rc genhtml_legend=1 00:05:53.680 --rc geninfo_all_blocks=1 00:05:53.680 --rc geninfo_unexecuted_blocks=1 00:05:53.680 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.680 ' 00:05:53.680 20:08:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:53.680 20:08:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.680 20:08:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.680 20:08:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:53.680 20:08:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:05:53.680 20:08:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.680 ************************************ 00:05:53.680 START TEST event_perf 00:05:53.680 ************************************ 00:05:53.680 20:08:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.680 Running I/O for 1 seconds...[2024-11-26 20:08:06.399495] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:53.680 [2024-11-26 20:08:06.399573] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597486 ] 00:05:53.680 [2024-11-26 20:08:06.474338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.680 [2024-11-26 20:08:06.520427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.680 [2024-11-26 20:08:06.520535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.680 [2024-11-26 20:08:06.520627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.680 [2024-11-26 20:08:06.520629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.065 Running I/O for 1 seconds... 00:05:55.065 lcore 0: 193886 00:05:55.065 lcore 1: 193887 00:05:55.065 lcore 2: 193888 00:05:55.065 lcore 3: 193887 00:05:55.065 done. 00:05:55.065 00:05:55.065 real 0m1.172s 00:05:55.065 user 0m4.090s 00:05:55.065 sys 0m0.080s 00:05:55.065 20:08:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.065 20:08:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.065 ************************************ 00:05:55.065 END TEST event_perf 00:05:55.065 ************************************ 00:05:55.065 20:08:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.065 20:08:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:55.065 20:08:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.065 20:08:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.065 ************************************ 00:05:55.065 START TEST event_reactor 00:05:55.065 ************************************ 00:05:55.065 20:08:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.065 [2024-11-26 20:08:07.652230] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:05:55.065 [2024-11-26 20:08:07.652311] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597676 ] 00:05:55.065 [2024-11-26 20:08:07.726567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.065 [2024-11-26 20:08:07.766356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.000 test_start 00:05:56.000 oneshot 00:05:56.000 tick 100 00:05:56.000 tick 100 00:05:56.000 tick 250 00:05:56.000 tick 100 00:05:56.000 tick 100 00:05:56.000 tick 100 00:05:56.000 tick 250 00:05:56.000 tick 500 00:05:56.000 tick 100 00:05:56.000 tick 100 00:05:56.000 tick 250 00:05:56.000 tick 100 00:05:56.000 tick 100 00:05:56.000 test_end 00:05:56.000 00:05:56.000 real 0m1.166s 00:05:56.000 user 0m1.080s 00:05:56.000 sys 0m0.082s 00:05:56.000 20:08:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.000 20:08:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.000 ************************************ 00:05:56.000 END TEST event_reactor 00:05:56.000 ************************************ 00:05:56.000 20:08:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.000 20:08:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:56.000 20:08:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.000 20:08:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.000 ************************************ 00:05:56.000 START TEST event_reactor_perf 00:05:56.000 ************************************ 00:05:56.000 20:08:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.000 [2024-11-26 20:08:08.891577] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:05:56.000 [2024-11-26 20:08:08.891703] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597956 ] 00:05:56.258 [2024-11-26 20:08:08.964348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.258 [2024-11-26 20:08:09.003602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.195 test_start 00:05:57.195 test_end 00:05:57.195 Performance: 956223 events per second 00:05:57.195 00:05:57.195 real 0m1.167s 00:05:57.195 user 0m1.086s 00:05:57.195 sys 0m0.077s 00:05:57.195 20:08:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.195 20:08:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.195 ************************************ 00:05:57.195 END TEST event_reactor_perf 00:05:57.195 ************************************ 00:05:57.195 20:08:10 event -- event/event.sh@49 -- # uname -s 00:05:57.195 20:08:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:57.195 20:08:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:57.195 20:08:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.195 20:08:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.195 20:08:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 START TEST event_scheduler 00:05:57.460 ************************************ 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:57.460 * Looking for test storage... 
00:05:57.460 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.460 20:08:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.460 --rc genhtml_branch_coverage=1 00:05:57.460 --rc genhtml_function_coverage=1 00:05:57.460 --rc genhtml_legend=1 00:05:57.460 --rc geninfo_all_blocks=1 00:05:57.460 --rc geninfo_unexecuted_blocks=1 00:05:57.460 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.460 ' 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.460 --rc genhtml_branch_coverage=1 00:05:57.460 --rc genhtml_function_coverage=1 00:05:57.460 --rc genhtml_legend=1 00:05:57.460 --rc geninfo_all_blocks=1 00:05:57.460 --rc geninfo_unexecuted_blocks=1 00:05:57.460 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.460 ' 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.460 --rc genhtml_branch_coverage=1 00:05:57.460 --rc genhtml_function_coverage=1 00:05:57.460 --rc genhtml_legend=1 00:05:57.460 --rc geninfo_all_blocks=1 00:05:57.460 --rc geninfo_unexecuted_blocks=1 00:05:57.460 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.460 ' 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.460 --rc genhtml_branch_coverage=1 00:05:57.460 --rc genhtml_function_coverage=1 00:05:57.460 --rc genhtml_legend=1 00:05:57.460 --rc geninfo_all_blocks=1 00:05:57.460 --rc geninfo_unexecuted_blocks=1 00:05:57.460 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.460 ' 00:05:57.460 20:08:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.460 20:08:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1598273 00:05:57.460 20:08:10 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.460 20:08:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.460 20:08:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1598273 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1598273 ']' 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.460 20:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 [2024-11-26 20:08:10.348167] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:05:57.460 [2024-11-26 20:08:10.348242] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598273 ] 00:05:57.718 [2024-11-26 20:08:10.416977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.718 [2024-11-26 20:08:10.464377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.718 [2024-11-26 20:08:10.464464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.718 [2024-11-26 20:08:10.464546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.718 [2024-11-26 20:08:10.464548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:57.718 20:08:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.718 [2024-11-26 20:08:10.525209] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:57.718 [2024-11-26 20:08:10.525230] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.718 [2024-11-26 20:08:10.525242] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.718 [2024-11-26 20:08:10.525249] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.718 [2024-11-26 20:08:10.525256] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.718 20:08:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 
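
The lcov check traced a little above (scripts/common.sh splitting "1.15" and "2" on IFS=.-: and comparing them field by field before building LCOV_OPTS) boils down to a small version comparator. A minimal sketch of the same idea, assuming purely numeric version fields; this is an illustration, not the actual scripts/common.sh code:

    # Return success when $1 is an older version than $2 (numeric fields only).
    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i a b
        for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
            a=${v1[i]:-0}; b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1
    }

    # Mirrors the trace: lcov 1.15 is older than 2, so the branch/function
    # coverage flags end up in LCOV_OPTS.
    if version_lt 1.15 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
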
00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.718 [2024-11-26 20:08:10.600753] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.718 20:08:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.718 20:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.718 ************************************ 00:05:57.718 START TEST scheduler_create_thread 00:05:57.718 ************************************ 00:05:57.718 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:57.718 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.718 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.718 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 2 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 3 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 4 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 5 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 
20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 6 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 7 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 8 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 9 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 10 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.978 20:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.547 20:08:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.547 20:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.547 20:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.547 20:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.921 20:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.921 20:08:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:59.921 20:08:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:59.921 20:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.921 20:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 20:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.856 00:06:00.856 real 0m3.099s 00:06:00.856 user 0m0.027s 00:06:00.856 sys 0m0.005s 00:06:00.856 20:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.856 20:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 ************************************ 00:06:00.856 END TEST scheduler_create_thread 00:06:00.856 ************************************ 00:06:00.856 20:08:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:00.856 20:08:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1598273 00:06:00.856 20:08:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1598273 ']' 00:06:00.856 20:08:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1598273 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1598273 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1598273' 00:06:01.114 killing process with pid 1598273 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1598273 00:06:01.114 20:08:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1598273 00:06:01.373 [2024-11-26 20:08:14.119916] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
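
Stripped of the xtrace noise, the scheduler_create_thread run above is a short RPC conversation with the scheduler test app. A condensed, hedged sketch of that sequence using the same rpc.py calls that appear in the trace (paths as printed in the log; it assumes the scheduler_plugin module is importable the way the test harness arranges it):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Start the scheduler test app on 4 cores and wait for its RPC socket.
    "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    sleep 1   # the real test waits with waitforlisten on /var/tmp/spdk.sock

    "$RPC" framework_set_scheduler dynamic   # the trace shows this proceeding even when the dpdk governor cannot initialize
    "$RPC" framework_start_init

    # Plugin RPCs from the trace: pin threads to cores, retune one, delete one.
    "$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    "$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    "$RPC" --plugin scheduler_plugin scheduler_thread_delete "$tid"

    kill "$scheduler_pid"
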
00:06:01.632 
00:06:01.632 real 0m4.169s 
00:06:01.632 user 0m6.673s 
00:06:01.632 sys 0m0.428s 
00:06:01.632 20:08:14 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:01.632 20:08:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
00:06:01.632 ************************************ 
00:06:01.632 END TEST event_scheduler 
00:06:01.632 ************************************ 
00:06:01.632 20:08:14 event -- event/event.sh@51 -- # modprobe -n nbd 
00:06:01.632 20:08:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 
00:06:01.632 20:08:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:01.632 20:08:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:06:01.632 20:08:14 event -- common/autotest_common.sh@10 -- # set +x 
00:06:01.632 ************************************ 
00:06:01.632 START TEST app_repeat 
00:06:01.632 ************************************ 
00:06:01.632 20:08:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1599115 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1599115' 
00:06:01.632 Process app_repeat pid: 1599115 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 
00:06:01.632 spdk_app_start Round 0 
00:06:01.632 20:08:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1599115 /var/tmp/spdk-nbd.sock 
00:06:01.632 20:08:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1599115 ']' 
00:06:01.632 20:08:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 
00:06:01.632 20:08:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 
00:06:01.632 20:08:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:01.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:01.632 20:08:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 
00:06:01.632 20:08:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 
00:06:01.632 [2024-11-26 20:08:14.426351] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:01.632 [2024-11-26 20:08:14.426436] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599115 ] 00:06:01.632 [2024-11-26 20:08:14.499334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.632 [2024-11-26 20:08:14.539697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.632 [2024-11-26 20:08:14.539701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.891 20:08:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.891 20:08:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.891 20:08:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.891 Malloc0 00:06:01.891 20:08:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.150 Malloc1 00:06:02.150 20:08:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.150 20:08:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.408 /dev/nbd0 00:06:02.409 20:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.409 20:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.409 1+0 records in 00:06:02.409 1+0 records out 00:06:02.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022343 s, 18.3 MB/s 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.409 20:08:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.409 20:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.409 20:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.409 20:08:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.667 /dev/nbd1 00:06:02.667 20:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.667 20:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.667 1+0 records in 00:06:02.667 1+0 records out 00:06:02.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233343 s, 17.6 MB/s 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.667 20:08:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.667 20:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.667 20:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
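
The waitfornbd fragments above (a grep of /proc/partitions, one direct-I/O dd, a stat of the copied block) amount to "wait until the NBD device exists and actually serves reads". A rough standalone sketch of that idea; the temp path and retry delay are illustrative rather than copied from autotest_common.sh:

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Read one block through the device to prove it answers I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

    waitfornbd nbd0
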
00:06:02.667 20:08:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.667 20:08:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.667 20:08:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.926 { 00:06:02.926 "nbd_device": "/dev/nbd0", 00:06:02.926 "bdev_name": "Malloc0" 00:06:02.926 }, 00:06:02.926 { 00:06:02.926 "nbd_device": "/dev/nbd1", 00:06:02.926 "bdev_name": "Malloc1" 00:06:02.926 } 00:06:02.926 ]' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.926 { 00:06:02.926 "nbd_device": "/dev/nbd0", 00:06:02.926 "bdev_name": "Malloc0" 00:06:02.926 }, 00:06:02.926 { 00:06:02.926 "nbd_device": "/dev/nbd1", 00:06:02.926 "bdev_name": "Malloc1" 00:06:02.926 } 00:06:02.926 ]' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.926 /dev/nbd1' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.926 /dev/nbd1' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.926 256+0 records in 00:06:02.926 256+0 records out 00:06:02.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0093358 s, 112 MB/s 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.926 256+0 records in 00:06:02.926 256+0 records out 00:06:02.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020015 s, 52.4 MB/s 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.926 256+0 records in 00:06:02.926 256+0 records out 00:06:02.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214886 s, 48.8 MB/s 
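
The 256-block dd runs above are the write half of nbd_dd_data_verify; the verify half that follows compares the same file back against each device. The pattern, pulled out of the trace into a plain sketch (the temp path is illustrative):

    # Fill a scratch file with random data and push it onto both NBD devices...
    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # ...then read each device back and fail on the first differing byte.
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"
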
00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.926 20:08:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.185 20:08:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.443 20:08:16 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.443 20:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.702 20:08:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.702 20:08:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.964 20:08:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.964 [2024-11-26 20:08:16.848464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.964 [2024-11-26 20:08:16.884913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.964 [2024-11-26 20:08:16.884914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.222 [2024-11-26 20:08:16.925191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.222 [2024-11-26 20:08:16.925239] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.507 20:08:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.507 20:08:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.507 spdk_app_start Round 1 00:06:07.507 20:08:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1599115 /var/tmp/spdk-nbd.sock 00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1599115 ']' 00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
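
Each app_repeat round, including the Round 1 that starts here, repeats the same RPC setup and teardown against the app's socket. A hedged outline built from the calls visible in the trace (sizes and device names as logged; error handling omitted):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Two 64 MiB malloc bdevs with 4 KiB blocks; rpc.py prints the bdev names.
    malloc0=$($RPC bdev_malloc_create 64 4096)   # -> Malloc0
    malloc1=$($RPC bdev_malloc_create 64 4096)   # -> Malloc1

    # Export each bdev as an NBD device, then list what is exported.
    $RPC nbd_start_disk "$malloc0" /dev/nbd0
    $RPC nbd_start_disk "$malloc1" /dev/nbd1
    $RPC nbd_get_disks

    # Teardown at the end of a round: unexport, then recycle the app instance.
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM
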
00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.507 20:08:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.507 20:08:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.507 Malloc0 00:06:07.507 20:08:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.507 Malloc1 00:06:07.507 20:08:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.507 20:08:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.507 20:08:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.507 20:08:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.508 20:08:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.766 /dev/nbd0 00:06:07.766 20:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.766 20:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.766 1+0 records in 00:06:07.766 1+0 records out 00:06:07.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241111 s, 17.0 MB/s 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.766 20:08:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.766 20:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.766 20:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.766 20:08:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.025 /dev/nbd1 00:06:08.025 20:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.025 20:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.025 1+0 records in 00:06:08.025 1+0 records out 00:06:08.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225387 s, 18.2 MB/s 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.025 20:08:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:08.025 20:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.025 20:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.025 20:08:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.025 20:08:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.025 20:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:08.283 20:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.283 { 00:06:08.283 "nbd_device": "/dev/nbd0", 00:06:08.283 "bdev_name": "Malloc0" 00:06:08.283 }, 00:06:08.283 { 00:06:08.283 "nbd_device": "/dev/nbd1", 00:06:08.283 "bdev_name": "Malloc1" 00:06:08.283 } 00:06:08.283 ]' 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.283 { 00:06:08.283 "nbd_device": "/dev/nbd0", 00:06:08.283 "bdev_name": "Malloc0" 00:06:08.283 }, 00:06:08.283 { 00:06:08.283 "nbd_device": "/dev/nbd1", 00:06:08.283 "bdev_name": "Malloc1" 00:06:08.283 } 00:06:08.283 ]' 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.283 /dev/nbd1' 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.283 /dev/nbd1' 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.283 20:08:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.284 256+0 records in 00:06:08.284 256+0 records out 00:06:08.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112073 s, 93.6 MB/s 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.284 256+0 records in 00:06:08.284 256+0 records out 00:06:08.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020292 s, 51.7 MB/s 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.284 256+0 records in 00:06:08.284 256+0 records out 00:06:08.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212493 s, 49.3 MB/s 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.284 20:08:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.542 20:08:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.801 20:08:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.061 20:08:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.061 20:08:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.320 20:08:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.320 [2024-11-26 20:08:22.175548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.320 [2024-11-26 20:08:22.212042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.320 [2024-11-26 20:08:22.212045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.579 [2024-11-26 20:08:22.253223] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.579 [2024-11-26 20:08:22.253268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.115 20:08:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.115 20:08:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:12.115 spdk_app_start Round 2 00:06:12.115 20:08:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1599115 /var/tmp/spdk-nbd.sock 00:06:12.115 20:08:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1599115 ']' 00:06:12.115 20:08:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.115 20:08:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.115 20:08:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
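
Zooming out, the trace shows the same rhythm three times: announce the round, rebuild the bdev/NBD setup, verify the data, then spdk_kill_instance SIGTERM and a 3-second pause while the app_repeat binary (started with -t 4) brings its SPDK instance back up. A very rough outline of that outer loop; setup_and_verify_round is a hypothetical stand-in for the RPC/dd/cmp steps above, not a helper from the repo:

    RPC="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for i in 0 1 2; do
        echo "spdk_app_start Round $i"
        setup_and_verify_round            # hypothetical: malloc bdevs + nbd export + dd/cmp
        $RPC spdk_kill_instance SIGTERM   # app_repeat restarts its SPDK app for the next round
        sleep 3
    done
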
00:06:12.115 20:08:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.115 20:08:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.373 20:08:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.373 20:08:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.373 20:08:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.631 Malloc0 00:06:12.631 20:08:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.890 Malloc1 00:06:12.890 20:08:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.890 20:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.149 /dev/nbd0 00:06:13.149 20:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.149 20:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.149 1+0 records in 00:06:13.149 1+0 records out 00:06:13.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249453 s, 16.4 MB/s 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.149 20:08:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.149 20:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.149 20:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.149 20:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.149 /dev/nbd1 00:06:13.407 20:08:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.407 20:08:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.407 1+0 records in 00:06:13.407 1+0 records out 00:06:13.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157169 s, 26.1 MB/s 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.407 20:08:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.408 20:08:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.408 20:08:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.408 { 00:06:13.408 "nbd_device": "/dev/nbd0", 00:06:13.408 "bdev_name": "Malloc0" 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "nbd_device": "/dev/nbd1", 00:06:13.408 "bdev_name": "Malloc1" 00:06:13.408 } 00:06:13.408 ]' 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.408 { 00:06:13.408 "nbd_device": "/dev/nbd0", 00:06:13.408 "bdev_name": "Malloc0" 00:06:13.408 }, 00:06:13.408 { 00:06:13.408 "nbd_device": "/dev/nbd1", 00:06:13.408 "bdev_name": "Malloc1" 00:06:13.408 } 00:06:13.408 ]' 00:06:13.408 20:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.667 /dev/nbd1' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.667 /dev/nbd1' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.667 256+0 records in 00:06:13.667 256+0 records out 00:06:13.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106929 s, 98.1 MB/s 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.667 256+0 records in 00:06:13.667 256+0 records out 00:06:13.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019971 s, 52.5 MB/s 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.667 256+0 records in 00:06:13.667 256+0 records out 00:06:13.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213529 s, 49.1 MB/s 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.667 20:08:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.927 20:08:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.186 20:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.186 20:08:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.186 20:08:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.445 20:08:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.704 [2024-11-26 20:08:27.461647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.704 [2024-11-26 20:08:27.498388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.704 [2024-11-26 20:08:27.498390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.704 [2024-11-26 20:08:27.538620] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.704 [2024-11-26 20:08:27.538668] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.989 20:08:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1599115 /var/tmp/spdk-nbd.sock 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1599115 ']' 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
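The block above is the core of the app_repeat NBD check: two malloc bdevs are exposed as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each, read back with cmp, and the devices are torn down until nbd_get_disks reports an empty list. A condensed sketch of that round trip for one device, reusing the rpc.py calls and sizes from the trace (the scratch-file path here is arbitrary):

  RPC="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  TMP=/tmp/nbdrandtest                        # any scratch file; the trace keeps its copy under test/event/

  $RPC bdev_malloc_create 64 4096             # 64 MB bdev with 4 KiB blocks -> prints "Malloc0"
  $RPC nbd_start_disk Malloc0 /dev/nbd0       # expose the bdev as a kernel NBD device

  dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB of random data
  dd if="$TMP" of=/dev/nbd0 bs=4096 count=256 oflag=direct    # write it through the NBD device
  cmp -b -n 1M "$TMP" /dev/nbd0                               # byte-for-byte read-back verification

  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_get_disks | jq -r '.[] | .nbd_device'              # prints nothing once the disk is stopped
  rm -f "$TMP"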
00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.989 20:08:30 event.app_repeat -- event/event.sh@39 -- # killprocess 1599115 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1599115 ']' 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1599115 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1599115 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1599115' 00:06:17.989 killing process with pid 1599115 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1599115 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1599115 00:06:17.989 spdk_app_start is called in Round 0. 00:06:17.989 Shutdown signal received, stop current app iteration 00:06:17.989 Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 reinitialization... 00:06:17.989 spdk_app_start is called in Round 1. 00:06:17.989 Shutdown signal received, stop current app iteration 00:06:17.989 Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 reinitialization... 00:06:17.989 spdk_app_start is called in Round 2. 00:06:17.989 Shutdown signal received, stop current app iteration 00:06:17.989 Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 reinitialization... 00:06:17.989 spdk_app_start is called in Round 3. 
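killprocess runs after every test in this suite; reconstructed from the xtrace lines above (the real helper lives in autotest_common.sh and carries a few extra guards), it amounts to:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # a pid is required
    kill -0 "$pid" || return 0                     # process already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1       # never kill a sudo wrapper by mistake
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                            # reap it; the exit status is not checked here
  }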
00:06:17.989 Shutdown signal received, stop current app iteration 00:06:17.989 20:08:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.989 20:08:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.989 00:06:17.989 real 0m16.299s 00:06:17.989 user 0m34.912s 00:06:17.989 sys 0m3.313s 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.989 20:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.989 ************************************ 00:06:17.989 END TEST app_repeat 00:06:17.989 ************************************ 00:06:17.989 20:08:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.989 20:08:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.989 20:08:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.990 20:08:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.990 20:08:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.990 ************************************ 00:06:17.990 START TEST cpu_locks 00:06:17.990 ************************************ 00:06:17.990 20:08:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.990 * Looking for test storage... 00:06:17.990 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:17.990 20:08:30 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.990 20:08:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.990 20:08:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.249 20:08:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.249 20:08:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.249 20:08:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.249 20:08:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.249 20:08:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.249 20:08:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.249 20:08:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.250 20:08:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.250 --rc genhtml_branch_coverage=1 00:06:18.250 --rc genhtml_function_coverage=1 00:06:18.250 --rc genhtml_legend=1 00:06:18.250 --rc geninfo_all_blocks=1 00:06:18.250 --rc geninfo_unexecuted_blocks=1 00:06:18.250 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:18.250 ' 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.250 --rc genhtml_branch_coverage=1 00:06:18.250 --rc genhtml_function_coverage=1 00:06:18.250 --rc genhtml_legend=1 00:06:18.250 --rc geninfo_all_blocks=1 00:06:18.250 --rc geninfo_unexecuted_blocks=1 00:06:18.250 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:18.250 ' 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.250 --rc genhtml_branch_coverage=1 00:06:18.250 --rc genhtml_function_coverage=1 00:06:18.250 --rc genhtml_legend=1 00:06:18.250 --rc geninfo_all_blocks=1 00:06:18.250 --rc geninfo_unexecuted_blocks=1 00:06:18.250 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:18.250 ' 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.250 --rc genhtml_branch_coverage=1 00:06:18.250 --rc genhtml_function_coverage=1 00:06:18.250 --rc genhtml_legend=1 00:06:18.250 --rc geninfo_all_blocks=1 00:06:18.250 --rc geninfo_unexecuted_blocks=1 00:06:18.250 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:18.250 ' 00:06:18.250 20:08:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.250 20:08:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.250 20:08:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.250 20:08:30 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.250 20:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.250 ************************************ 00:06:18.250 START TEST default_locks 00:06:18.250 ************************************ 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1602131 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1602131 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1602131 ']' 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.250 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.250 [2024-11-26 20:08:31.040267] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:18.250 [2024-11-26 20:08:31.040325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602131 ] 00:06:18.250 [2024-11-26 20:08:31.110923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.250 [2024-11-26 20:08:31.154455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.509 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.509 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:18.509 20:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1602131 00:06:18.509 20:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1602131 00:06:18.509 20:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.076 lslocks: write error 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1602131 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1602131 ']' 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1602131 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1602131 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1602131' 00:06:19.076 killing process with pid 1602131 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1602131 00:06:19.076 20:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1602131 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1602131 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1602131 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1602131 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1602131 ']' 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.335 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1602131) - No such process 00:06:19.335 ERROR: process (pid: 1602131) is no longer running 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.335 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.336 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.336 20:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.336 20:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.336 20:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.336 20:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.336 00:06:19.336 real 0m1.237s 00:06:19.336 user 0m1.245s 00:06:19.336 sys 0m0.554s 00:06:19.336 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.336 20:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.336 ************************************ 00:06:19.336 END TEST default_locks 00:06:19.336 ************************************ 00:06:19.595 20:08:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.595 20:08:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.595 20:08:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.595 20:08:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.595 ************************************ 00:06:19.595 START TEST default_locks_via_rpc 00:06:19.595 ************************************ 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1602343 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1602343 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1602343 ']' 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 
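default_locks, which finishes just above, verifies that a target started with -m 0x1 really holds a per-core file lock while it runs and that nothing is left behind once it is killed. The check itself is only lslocks piped into grep (the "lslocks: write error" lines in the trace are most likely harmless noise from grep -q closing the pipe early). A sketch, using the spdk_tgt path from the trace and a simplified stand-in for waitforlisten:

  SPDK_TGT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $SPDK_TGT -m 0x1 &                                         # claim core 0; the target takes an spdk_cpu_lock file lock
  pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done        # crude wait for the RPC socket (waitforlisten does more)

  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

  kill "$pid"; wait "$pid"
  lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock || echo "lock released"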
00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.595 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.595 [2024-11-26 20:08:32.354033] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:19.595 [2024-11-26 20:08:32.354090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602343 ] 00:06:19.595 [2024-11-26 20:08:32.424195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.595 [2024-11-26 20:08:32.466783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1602343 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1602343 00:06:19.853 20:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1602343 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1602343 ']' 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1602343 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1602343 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1602343' 00:06:20.420 killing process with pid 1602343 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1602343 00:06:20.420 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1602343 00:06:20.987 00:06:20.987 real 0m1.300s 00:06:20.987 user 0m1.278s 00:06:20.987 sys 0m0.619s 00:06:20.987 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.987 20:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.987 ************************************ 00:06:20.987 END TEST default_locks_via_rpc 00:06:20.987 ************************************ 00:06:20.987 20:08:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.987 20:08:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.987 20:08:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.987 20:08:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.987 ************************************ 00:06:20.987 START TEST non_locking_app_on_locked_coremask 00:06:20.987 ************************************ 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1602622 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1602622 /var/tmp/spdk.sock 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1602622 ']' 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.987 20:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.987 [2024-11-26 20:08:33.739854] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
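default_locks_via_rpc, completed above, exercises the same lock over RPC on a live target instead of across process lifetime: framework_disable_cpumask_locks releases the per-core locks and framework_enable_cpumask_locks takes them back. Roughly (pid is the running spdk_tgt, 1602343 in the trace; rpc.py talks to the default /var/tmp/spdk.sock):

  RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py

  $RPC framework_disable_cpumask_locks                               # drop the per-core file locks at runtime
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"

  $RPC framework_enable_cpumask_locks                                # take them back
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"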
00:06:20.987 [2024-11-26 20:08:33.739923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602622 ] 00:06:20.987 [2024-11-26 20:08:33.811770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.987 [2024-11-26 20:08:33.854006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1602767 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1602767 /var/tmp/spdk2.sock 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1602767 ']' 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.248 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.248 [2024-11-26 20:08:34.086154] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:21.248 [2024-11-26 20:08:34.086223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602767 ] 00:06:21.506 [2024-11-26 20:08:34.183408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
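non_locking_app_on_locked_coremask, whose two targets have just come up above, makes the positive point: a second target on an already-locked core mask still starts as long as it opts out of the locks. In outline:

  BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $BIN -m 0x1 &                                                   # first instance claims the core-0 lock
  $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same mask, no lock taken, own RPC socket
                                                                  # -> the "CPU core locks deactivated." notice above

The follow-up checks in the trace then confirm the lock is still attributed to the first pid (locks_exist 1602622) before both targets are killed.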
00:06:21.506 [2024-11-26 20:08:34.183442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.506 [2024-11-26 20:08:34.262192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.073 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.073 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.073 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1602622 00:06:22.073 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1602622 00:06:22.073 20:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.448 lslocks: write error 00:06:23.448 20:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1602622 00:06:23.448 20:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1602622 ']' 00:06:23.448 20:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1602622 00:06:23.448 20:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.448 20:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.448 20:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1602622 00:06:23.448 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.448 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.448 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1602622' 00:06:23.448 killing process with pid 1602622 00:06:23.448 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1602622 00:06:23.448 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1602622 00:06:23.706 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1602767 00:06:23.706 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1602767 ']' 00:06:23.706 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1602767 00:06:23.706 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.706 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.706 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1602767 00:06:23.963 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.963 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.963 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1602767' 00:06:23.963 
killing process with pid 1602767 00:06:23.963 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1602767 00:06:23.963 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1602767 00:06:24.221 00:06:24.221 real 0m3.238s 00:06:24.221 user 0m3.398s 00:06:24.221 sys 0m1.219s 00:06:24.221 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.221 20:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.221 ************************************ 00:06:24.221 END TEST non_locking_app_on_locked_coremask 00:06:24.221 ************************************ 00:06:24.221 20:08:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.221 20:08:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.221 20:08:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.221 20:08:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.221 ************************************ 00:06:24.221 START TEST locking_app_on_unlocked_coremask 00:06:24.221 ************************************ 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1603293 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1603293 /var/tmp/spdk.sock 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1603293 ']' 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.221 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.221 [2024-11-26 20:08:37.064078] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:24.221 [2024-11-26 20:08:37.064158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603293 ] 00:06:24.221 [2024-11-26 20:08:37.137029] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.221 [2024-11-26 20:08:37.137058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.479 [2024-11-26 20:08:37.178523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1603444 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1603444 /var/tmp/spdk2.sock 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1603444 ']' 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.479 20:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.737 [2024-11-26 20:08:37.411438] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:24.737 [2024-11-26 20:08:37.411527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603444 ] 00:06:24.737 [2024-11-26 20:08:37.508411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.737 [2024-11-26 20:08:37.588081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.671 20:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.671 20:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.671 20:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1603444 00:06:25.671 20:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1603444 00:06:25.671 20:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.603 lslocks: write error 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1603293 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1603293 ']' 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1603293 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1603293 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1603293' 00:06:26.603 killing process with pid 1603293 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1603293 00:06:26.603 20:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1603293 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1603444 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1603444 ']' 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1603444 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1603444 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.167 20:08:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1603444' 00:06:27.167 killing process with pid 1603444 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1603444 00:06:27.167 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1603444 00:06:27.780 00:06:27.780 real 0m3.337s 00:06:27.780 user 0m3.518s 00:06:27.780 sys 0m1.240s 00:06:27.780 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.780 20:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.780 ************************************ 00:06:27.780 END TEST locking_app_on_unlocked_coremask 00:06:27.780 ************************************ 00:06:27.780 20:08:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:27.780 20:08:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.780 20:08:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.781 20:08:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.781 ************************************ 00:06:27.781 START TEST locking_app_on_locked_coremask 00:06:27.781 ************************************ 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1604012 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1604012 /var/tmp/spdk.sock 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1604012 ']' 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.781 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.781 [2024-11-26 20:08:40.488715] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:27.781 [2024-11-26 20:08:40.488804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604012 ] 00:06:27.781 [2024-11-26 20:08:40.559911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.781 [2024-11-26 20:08:40.599305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.038 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.038 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:28.038 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1604021 00:06:28.038 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1604021 /var/tmp/spdk2.sock 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1604021 /var/tmp/spdk2.sock 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1604021 /var/tmp/spdk2.sock 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1604021 ']' 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.039 20:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.039 [2024-11-26 20:08:40.832433] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:28.039 [2024-11-26 20:08:40.832518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604021 ] 00:06:28.039 [2024-11-26 20:08:40.925830] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1604012 has claimed it. 00:06:28.039 [2024-11-26 20:08:40.925874] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.604 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1604021) - No such process 00:06:28.604 ERROR: process (pid: 1604021) is no longer running 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1604012 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1604012 00:06:28.604 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.168 lslocks: write error 00:06:29.169 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1604012 00:06:29.169 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1604012 ']' 00:06:29.169 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1604012 00:06:29.169 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.169 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.169 20:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604012 00:06:29.169 20:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.169 20:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.169 20:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604012' 00:06:29.169 killing process with pid 1604012 00:06:29.169 20:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1604012 00:06:29.169 20:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1604012 00:06:29.427 00:06:29.427 real 0m1.859s 00:06:29.427 user 0m1.964s 00:06:29.427 sys 0m0.669s 00:06:29.427 20:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
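locking_app_on_locked_coremask above is the negative case: without --disable-cpumask-locks, a second target on a locked core refuses to start ("Cannot create lock on core 0, probably process 1604012 has claimed it"), and the test passes only because the launch is wrapped in the suite's NOT helper. Stripped of its argument validation and signal handling, that helper comes down to:

  NOT() {                          # simplified reconstruction; the real helper is in autotest_common.sh
    local es=0
    "$@" || es=$?
    (( es != 0 ))                  # succeed only if the wrapped command failed
  }

  NOT waitforlisten 1604021 /var/tmp/spdk2.sock    # passes: the second target exited instead of listening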
00:06:29.427 20:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.427 ************************************ 00:06:29.427 END TEST locking_app_on_locked_coremask 00:06:29.427 ************************************ 00:06:29.685 20:08:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.685 20:08:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.685 20:08:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.685 20:08:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.685 ************************************ 00:06:29.685 START TEST locking_overlapped_coremask 00:06:29.685 ************************************ 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1604315 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1604315 /var/tmp/spdk.sock 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1604315 ']' 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.685 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.685 [2024-11-26 20:08:42.409528] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:29.685 [2024-11-26 20:08:42.409573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604315 ] 00:06:29.685 [2024-11-26 20:08:42.478622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.685 [2024-11-26 20:08:42.525093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.685 [2024-11-26 20:08:42.525189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.685 [2024-11-26 20:08:42.525191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.943 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.943 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.943 20:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1604332 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1604332 /var/tmp/spdk2.sock 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1604332 /var/tmp/spdk2.sock 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1604332 /var/tmp/spdk2.sock 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1604332 ']' 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.944 20:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.944 [2024-11-26 20:08:42.768447] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
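For orientation: -m 0x7 pins the first target to cores 0-2 and -m 0x1c pins the second to cores 2-4, so the two masks overlap on core 2. With core-mask locking active (the default), the first target takes the per-core lock files, and the second instance starting below is expected to abort with "Cannot create lock on core 2". A condensed sketch of the lock check the helper performs against a running target, reusing only names that appear in this trace (the pid placeholder is illustrative):

    lslocks -p <spdk_tgt pid> | grep -q spdk_cpu_lock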
00:06:29.944 [2024-11-26 20:08:42.768510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604332 ] 00:06:29.944 [2024-11-26 20:08:42.868284] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1604315 has claimed it. 00:06:29.944 [2024-11-26 20:08:42.868322] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.509 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1604332) - No such process 00:06:30.509 ERROR: process (pid: 1604332) is no longer running 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1604315 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1604315 ']' 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1604315 00:06:30.509 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.767 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.767 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604315 00:06:30.767 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.767 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.767 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604315' 00:06:30.767 killing process with pid 1604315 00:06:30.767 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1604315 00:06:30.767 20:08:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1604315 00:06:31.025 00:06:31.025 real 0m1.400s 00:06:31.025 user 0m3.920s 00:06:31.025 sys 0m0.416s 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.025 ************************************ 00:06:31.025 END TEST locking_overlapped_coremask 00:06:31.025 ************************************ 00:06:31.025 20:08:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:31.025 20:08:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.025 20:08:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.025 20:08:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.025 ************************************ 00:06:31.025 START TEST locking_overlapped_coremask_via_rpc 00:06:31.025 ************************************ 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1604624 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1604624 /var/tmp/spdk.sock 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1604624 ']' 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.025 20:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.025 [2024-11-26 20:08:43.909755] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:31.025 [2024-11-26 20:08:43.909836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604624 ] 00:06:31.283 [2024-11-26 20:08:43.981639] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
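In contrast to the previous test, both targets here are launched with --disable-cpumask-locks, so they come up despite sharing core 2 and the conflict is only provoked later through the framework_enable_cpumask_locks RPC. Condensed from the trace (paths shortened for readability, flags as logged):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks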
00:06:31.283 [2024-11-26 20:08:43.981667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.283 [2024-11-26 20:08:44.022129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.283 [2024-11-26 20:08:44.022228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.283 [2024-11-26 20:08:44.022231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1604629 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1604629 /var/tmp/spdk2.sock 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1604629 ']' 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.542 20:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.542 [2024-11-26 20:08:44.266586] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:31.542 [2024-11-26 20:08:44.266655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604629 ] 00:06:31.542 [2024-11-26 20:08:44.367856] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.542 [2024-11-26 20:08:44.367891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.542 [2024-11-26 20:08:44.451015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.542 [2024-11-26 20:08:44.454648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.542 [2024-11-26 20:08:44.454648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.477 [2024-11-26 20:08:45.141667] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1604624 has claimed it. 
00:06:32.477 request: 00:06:32.477 { 00:06:32.477 "method": "framework_enable_cpumask_locks", 00:06:32.477 "req_id": 1 00:06:32.477 } 00:06:32.477 Got JSON-RPC error response 00:06:32.477 response: 00:06:32.477 { 00:06:32.477 "code": -32603, 00:06:32.477 "message": "Failed to claim CPU core: 2" 00:06:32.477 } 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1604624 /var/tmp/spdk.sock 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1604624 ']' 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1604629 /var/tmp/spdk2.sock 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1604629 ']' 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
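The -32603 response above is the expected result: once the primary target (mask 0x7) re-enables the per-core lock files, it claims core 2, and the secondary instance can no longer lock it. A minimal way to reproduce the failing call by hand, assuming the same secondary socket path used in this run, would be:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

(rpc_cmd in the trace is effectively this invocation routed through the test helpers.)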
00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.477 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.735 00:06:32.735 real 0m1.669s 00:06:32.735 user 0m0.790s 00:06:32.735 sys 0m0.158s 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.735 20:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.735 ************************************ 00:06:32.735 END TEST locking_overlapped_coremask_via_rpc 00:06:32.735 ************************************ 00:06:32.735 20:08:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.735 20:08:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1604624 ]] 00:06:32.735 20:08:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1604624 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1604624 ']' 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1604624 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604624 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604624' 00:06:32.735 killing process with pid 1604624 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1604624 00:06:32.735 20:08:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1604624 00:06:33.301 20:08:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1604629 ]] 00:06:33.301 20:08:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1604629 00:06:33.301 20:08:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1604629 ']' 00:06:33.301 20:08:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1604629 00:06:33.301 20:08:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:33.301 20:08:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:33.301 20:08:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604629 00:06:33.301 20:08:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:33.301 20:08:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:33.301 20:08:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604629' 00:06:33.301 killing process with pid 1604629 00:06:33.301 20:08:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1604629 00:06:33.301 20:08:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1604629 00:06:33.560 20:08:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.560 20:08:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.560 20:08:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1604624 ]] 00:06:33.560 20:08:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1604624 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1604624 ']' 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1604624 00:06:33.560 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1604624) - No such process 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1604624 is not found' 00:06:33.560 Process with pid 1604624 is not found 00:06:33.560 20:08:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1604629 ]] 00:06:33.560 20:08:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1604629 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1604629 ']' 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1604629 00:06:33.560 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1604629) - No such process 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1604629 is not found' 00:06:33.560 Process with pid 1604629 is not found 00:06:33.560 20:08:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.560 00:06:33.560 real 0m15.566s 00:06:33.560 user 0m25.963s 00:06:33.560 sys 0m5.983s 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.560 20:08:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.560 ************************************ 00:06:33.560 END TEST cpu_locks 00:06:33.560 ************************************ 00:06:33.560 00:06:33.560 real 0m40.243s 00:06:33.560 user 1m14.083s 00:06:33.560 sys 0m10.442s 00:06:33.560 20:08:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.560 20:08:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.560 ************************************ 00:06:33.560 END TEST event 00:06:33.560 ************************************ 00:06:33.560 20:08:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:33.560 20:08:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.560 20:08:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.560 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:06:33.560 ************************************ 00:06:33.560 START TEST thread 00:06:33.560 ************************************ 00:06:33.560 20:08:46 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:33.819 * Looking for test storage... 00:06:33.819 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.819 20:08:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.819 20:08:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.819 20:08:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.819 20:08:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.819 20:08:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.819 20:08:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.819 20:08:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.819 20:08:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.819 20:08:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.819 20:08:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.819 20:08:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.819 20:08:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:33.819 20:08:46 thread -- scripts/common.sh@345 -- # : 1 00:06:33.819 20:08:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.819 20:08:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.819 20:08:46 thread -- scripts/common.sh@365 -- # decimal 1 00:06:33.819 20:08:46 thread -- scripts/common.sh@353 -- # local d=1 00:06:33.819 20:08:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.819 20:08:46 thread -- scripts/common.sh@355 -- # echo 1 00:06:33.819 20:08:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.819 20:08:46 thread -- scripts/common.sh@366 -- # decimal 2 00:06:33.819 20:08:46 thread -- scripts/common.sh@353 -- # local d=2 00:06:33.819 20:08:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.819 20:08:46 thread -- scripts/common.sh@355 -- # echo 2 00:06:33.819 20:08:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.819 20:08:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.819 20:08:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.819 20:08:46 thread -- scripts/common.sh@368 -- # return 0 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.819 --rc genhtml_branch_coverage=1 00:06:33.819 --rc genhtml_function_coverage=1 00:06:33.819 --rc genhtml_legend=1 00:06:33.819 --rc geninfo_all_blocks=1 00:06:33.819 --rc geninfo_unexecuted_blocks=1 00:06:33.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.819 ' 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.819 --rc genhtml_branch_coverage=1 00:06:33.819 --rc genhtml_function_coverage=1 00:06:33.819 --rc genhtml_legend=1 
00:06:33.819 --rc geninfo_all_blocks=1 00:06:33.819 --rc geninfo_unexecuted_blocks=1 00:06:33.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.819 ' 00:06:33.819 20:08:46 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.819 --rc genhtml_branch_coverage=1 00:06:33.819 --rc genhtml_function_coverage=1 00:06:33.819 --rc genhtml_legend=1 00:06:33.819 --rc geninfo_all_blocks=1 00:06:33.819 --rc geninfo_unexecuted_blocks=1 00:06:33.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.819 ' 00:06:33.820 20:08:46 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.820 --rc genhtml_branch_coverage=1 00:06:33.820 --rc genhtml_function_coverage=1 00:06:33.820 --rc genhtml_legend=1 00:06:33.820 --rc geninfo_all_blocks=1 00:06:33.820 --rc geninfo_unexecuted_blocks=1 00:06:33.820 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.820 ' 00:06:33.820 20:08:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.820 20:08:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:33.820 20:08:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.820 20:08:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.820 ************************************ 00:06:33.820 START TEST thread_poller_perf 00:06:33.820 ************************************ 00:06:33.820 20:08:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.820 [2024-11-26 20:08:46.724856] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:33.820 [2024-11-26 20:08:46.724954] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605257 ] 00:06:34.078 [2024-11-26 20:08:46.799158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.078 [2024-11-26 20:08:46.838355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.078 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:35.025 [2024-11-26T19:08:47.954Z] ====================================== 00:06:35.025 [2024-11-26T19:08:47.954Z] busy:2505443296 (cyc) 00:06:35.025 [2024-11-26T19:08:47.954Z] total_run_count: 822000 00:06:35.025 [2024-11-26T19:08:47.954Z] tsc_hz: 2500000000 (cyc) 00:06:35.025 [2024-11-26T19:08:47.954Z] ====================================== 00:06:35.025 [2024-11-26T19:08:47.954Z] poller_cost: 3047 (cyc), 1218 (nsec) 00:06:35.025 00:06:35.025 real 0m1.172s 00:06:35.025 user 0m1.090s 00:06:35.025 sys 0m0.078s 00:06:35.025 20:08:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.025 20:08:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.025 ************************************ 00:06:35.025 END TEST thread_poller_perf 00:06:35.025 ************************************ 00:06:35.025 20:08:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.025 20:08:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:35.025 20:08:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.025 20:08:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.307 ************************************ 00:06:35.307 START TEST thread_poller_perf 00:06:35.307 ************************************ 00:06:35.307 20:08:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.307 [2024-11-26 20:08:47.985539] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:35.307 [2024-11-26 20:08:47.985632] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605498 ] 00:06:35.307 [2024-11-26 20:08:48.061222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.307 [2024-11-26 20:08:48.102129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.307 Running 1000 pollers for 1 seconds with 0 microseconds period. 
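For reference, poller_cost above is busy cycles divided by iterations: 2505443296 cyc / 822000 runs ≈ 3047 cyc per poller call, and at the reported tsc_hz of 2500000000 (2.5 cycles per nanosecond) that works out to ≈ 1218 nsec, matching the summary line. The zero-period run whose figures follow is derived the same way: 2501375300 / 13168000 ≈ 189 cyc ≈ 75 nsec.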
00:06:36.258 [2024-11-26T19:08:49.187Z] ====================================== 00:06:36.258 [2024-11-26T19:08:49.187Z] busy:2501375300 (cyc) 00:06:36.258 [2024-11-26T19:08:49.187Z] total_run_count: 13168000 00:06:36.258 [2024-11-26T19:08:49.187Z] tsc_hz: 2500000000 (cyc) 00:06:36.258 [2024-11-26T19:08:49.187Z] ====================================== 00:06:36.258 [2024-11-26T19:08:49.187Z] poller_cost: 189 (cyc), 75 (nsec) 00:06:36.258 00:06:36.258 real 0m1.171s 00:06:36.258 user 0m1.082s 00:06:36.258 sys 0m0.085s 00:06:36.258 20:08:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.258 20:08:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.258 ************************************ 00:06:36.258 END TEST thread_poller_perf 00:06:36.258 ************************************ 00:06:36.258 20:08:49 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:36.258 20:08:49 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:36.258 20:08:49 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.258 20:08:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.258 20:08:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.515 ************************************ 00:06:36.515 START TEST thread_spdk_lock 00:06:36.515 ************************************ 00:06:36.515 20:08:49 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:36.515 [2024-11-26 20:08:49.241687] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:36.515 [2024-11-26 20:08:49.241769] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605639 ] 00:06:36.515 [2024-11-26 20:08:49.316429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.515 [2024-11-26 20:08:49.357699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.515 [2024-11-26 20:08:49.357702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.081 [2024-11-26 20:08:49.850428] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:37.081 [2024-11-26 20:08:49.850464] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:37.081 [2024-11-26 20:08:49.850473] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14dbbc0 00:06:37.081 [2024-11-26 20:08:49.851235] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:37.081 [2024-11-26 20:08:49.851340] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:37.081 [2024-11-26 
20:08:49.851359] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:37.081 Starting test contend 00:06:37.081 Worker Delay Wait us Hold us Total us 00:06:37.081 0 3 169970 186182 356153 00:06:37.081 1 5 87624 286427 374052 00:06:37.081 PASS test contend 00:06:37.081 Starting test hold_by_poller 00:06:37.081 PASS test hold_by_poller 00:06:37.081 Starting test hold_by_message 00:06:37.081 PASS test hold_by_message 00:06:37.081 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:37.081 100014 assertions passed 00:06:37.081 0 assertions failed 00:06:37.081 00:06:37.081 real 0m0.661s 00:06:37.081 user 0m1.068s 00:06:37.081 sys 0m0.083s 00:06:37.081 20:08:49 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.081 20:08:49 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:37.081 ************************************ 00:06:37.081 END TEST thread_spdk_lock 00:06:37.081 ************************************ 00:06:37.081 00:06:37.081 real 0m3.456s 00:06:37.081 user 0m3.422s 00:06:37.081 sys 0m0.552s 00:06:37.081 20:08:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.081 20:08:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.081 ************************************ 00:06:37.081 END TEST thread 00:06:37.081 ************************************ 00:06:37.081 20:08:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:37.081 20:08:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:37.081 20:08:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.081 20:08:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.081 20:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:37.081 ************************************ 00:06:37.081 START TEST app_cmdline 00:06:37.081 ************************************ 00:06:37.081 20:08:50 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:37.340 * Looking for test storage... 
00:06:37.340 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.340 20:08:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.340 --rc genhtml_branch_coverage=1 00:06:37.340 --rc genhtml_function_coverage=1 00:06:37.340 --rc genhtml_legend=1 00:06:37.340 --rc geninfo_all_blocks=1 00:06:37.340 --rc geninfo_unexecuted_blocks=1 00:06:37.340 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.340 ' 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.340 --rc genhtml_branch_coverage=1 00:06:37.340 --rc genhtml_function_coverage=1 00:06:37.340 --rc 
genhtml_legend=1 00:06:37.340 --rc geninfo_all_blocks=1 00:06:37.340 --rc geninfo_unexecuted_blocks=1 00:06:37.340 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.340 ' 00:06:37.340 20:08:50 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.340 --rc genhtml_branch_coverage=1 00:06:37.341 --rc genhtml_function_coverage=1 00:06:37.341 --rc genhtml_legend=1 00:06:37.341 --rc geninfo_all_blocks=1 00:06:37.341 --rc geninfo_unexecuted_blocks=1 00:06:37.341 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.341 ' 00:06:37.341 20:08:50 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.341 --rc genhtml_branch_coverage=1 00:06:37.341 --rc genhtml_function_coverage=1 00:06:37.341 --rc genhtml_legend=1 00:06:37.341 --rc geninfo_all_blocks=1 00:06:37.341 --rc geninfo_unexecuted_blocks=1 00:06:37.341 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.341 ' 00:06:37.341 20:08:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:37.341 20:08:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1605914 00:06:37.341 20:08:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1605914 00:06:37.341 20:08:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:37.341 20:08:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1605914 ']' 00:06:37.341 20:08:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.341 20:08:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.341 20:08:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.341 20:08:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.341 20:08:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.341 [2024-11-26 20:08:50.219860] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
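This target is intentionally started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over the socket. That is why, further below in this test, spdk_get_version returns the version object while env_dpdk_get_mem_stats is rejected with -32601 "Method not found". Issued by hand against the default socket used here, the two calls would be roughly:

    scripts/rpc.py spdk_get_version
    scripts/rpc.py env_dpdk_get_mem_stats   # rejected by the allow-list in this run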
00:06:37.341 [2024-11-26 20:08:50.219924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605914 ] 00:06:37.599 [2024-11-26 20:08:50.291835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.599 [2024-11-26 20:08:50.334755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:37.858 { 00:06:37.858 "version": "SPDK v25.01-pre git sha1 7cc16c961", 00:06:37.858 "fields": { 00:06:37.858 "major": 25, 00:06:37.858 "minor": 1, 00:06:37.858 "patch": 0, 00:06:37.858 "suffix": "-pre", 00:06:37.858 "commit": "7cc16c961" 00:06:37.858 } 00:06:37.858 } 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.858 20:08:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.858 20:08:50 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:37.858 20:08:50 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.117 request: 00:06:38.117 { 00:06:38.117 "method": "env_dpdk_get_mem_stats", 00:06:38.117 "req_id": 1 00:06:38.117 } 00:06:38.117 Got JSON-RPC error response 00:06:38.117 response: 00:06:38.117 { 00:06:38.117 "code": -32601, 00:06:38.117 "message": "Method not found" 00:06:38.117 } 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.117 20:08:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1605914 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1605914 ']' 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1605914 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.117 20:08:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605914 00:06:38.117 20:08:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.117 20:08:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.117 20:08:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605914' 00:06:38.117 killing process with pid 1605914 00:06:38.117 20:08:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 1605914 00:06:38.117 20:08:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 1605914 00:06:38.684 00:06:38.684 real 0m1.315s 00:06:38.684 user 0m1.486s 00:06:38.684 sys 0m0.494s 00:06:38.684 20:08:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.684 20:08:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.684 ************************************ 00:06:38.684 END TEST app_cmdline 00:06:38.684 ************************************ 00:06:38.684 20:08:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:38.684 20:08:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.684 20:08:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.684 20:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.684 ************************************ 00:06:38.684 START TEST version 00:06:38.684 ************************************ 00:06:38.684 20:08:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:38.684 * Looking for test storage... 
00:06:38.684 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:38.684 20:08:51 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.684 20:08:51 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.684 20:08:51 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.684 20:08:51 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.684 20:08:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.684 20:08:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.684 20:08:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.684 20:08:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.684 20:08:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.684 20:08:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.684 20:08:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.684 20:08:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.684 20:08:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.684 20:08:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.684 20:08:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.684 20:08:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:38.684 20:08:51 version -- scripts/common.sh@345 -- # : 1 00:06:38.684 20:08:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.684 20:08:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.684 20:08:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:38.684 20:08:51 version -- scripts/common.sh@353 -- # local d=1 00:06:38.684 20:08:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.684 20:08:51 version -- scripts/common.sh@355 -- # echo 1 00:06:38.684 20:08:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.684 20:08:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:38.684 20:08:51 version -- scripts/common.sh@353 -- # local d=2 00:06:38.684 20:08:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.684 20:08:51 version -- scripts/common.sh@355 -- # echo 2 00:06:38.684 20:08:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.684 20:08:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.684 20:08:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.684 20:08:51 version -- scripts/common.sh@368 -- # return 0 00:06:38.684 20:08:51 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.684 20:08:51 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.684 --rc genhtml_branch_coverage=1 00:06:38.684 --rc genhtml_function_coverage=1 00:06:38.684 --rc genhtml_legend=1 00:06:38.684 --rc geninfo_all_blocks=1 00:06:38.685 --rc geninfo_unexecuted_blocks=1 00:06:38.685 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.685 ' 00:06:38.685 20:08:51 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.685 --rc genhtml_branch_coverage=1 00:06:38.685 --rc genhtml_function_coverage=1 00:06:38.685 --rc genhtml_legend=1 00:06:38.685 --rc geninfo_all_blocks=1 00:06:38.685 --rc geninfo_unexecuted_blocks=1 00:06:38.685 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.685 ' 00:06:38.685 20:08:51 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.685 --rc genhtml_branch_coverage=1 00:06:38.685 --rc genhtml_function_coverage=1 00:06:38.685 --rc genhtml_legend=1 00:06:38.685 --rc geninfo_all_blocks=1 00:06:38.685 --rc geninfo_unexecuted_blocks=1 00:06:38.685 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.685 ' 00:06:38.685 20:08:51 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.685 --rc genhtml_branch_coverage=1 00:06:38.685 --rc genhtml_function_coverage=1 00:06:38.685 --rc genhtml_legend=1 00:06:38.685 --rc geninfo_all_blocks=1 00:06:38.685 --rc geninfo_unexecuted_blocks=1 00:06:38.685 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.685 ' 00:06:38.685 20:08:51 version -- app/version.sh@17 -- # get_header_version major 00:06:38.685 20:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # cut -f2 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.685 20:08:51 version -- app/version.sh@17 -- # major=25 00:06:38.685 20:08:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:38.685 20:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # cut -f2 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.685 20:08:51 version -- app/version.sh@18 -- # minor=1 00:06:38.685 20:08:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:38.685 20:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # cut -f2 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.685 20:08:51 version -- app/version.sh@19 -- # patch=0 00:06:38.685 20:08:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:38.685 20:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # cut -f2 00:06:38.685 20:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.685 20:08:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:38.685 20:08:51 version -- app/version.sh@22 -- # version=25.1 00:06:38.685 20:08:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.685 20:08:51 version -- app/version.sh@28 -- # version=25.1rc0 00:06:38.943 20:08:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:38.943 20:08:51 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:38.943 20:08:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:38.943 20:08:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:38.943 00:06:38.943 real 0m0.242s 00:06:38.943 user 0m0.127s 00:06:38.943 sys 0m0.165s 00:06:38.943 20:08:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.943 20:08:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:38.943 ************************************ 00:06:38.943 END TEST version 00:06:38.943 ************************************ 00:06:38.943 20:08:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:38.943 20:08:51 -- spdk/autotest.sh@194 -- # uname -s 00:06:38.943 20:08:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:38.943 20:08:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.943 20:08:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.943 20:08:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:38.943 20:08:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.943 20:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.943 20:08:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:38.943 20:08:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:38.944 20:08:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:38.944 20:08:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:38.944 20:08:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:06:38.944 20:08:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:38.944 20:08:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:06:38.944 20:08:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:38.944 20:08:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:06:38.944 20:08:51 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:06:38.944 20:08:51 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:38.944 20:08:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.944 20:08:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.944 20:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 ************************************ 00:06:38.944 START TEST llvm_fuzz 00:06:38.944 ************************************ 00:06:38.944 20:08:51 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:38.944 * Looking for test storage... 
00:06:38.944 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:38.944 20:08:51 llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.202 20:08:51 llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.203 20:08:51 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.203 --rc genhtml_branch_coverage=1 00:06:39.203 --rc genhtml_function_coverage=1 00:06:39.203 --rc genhtml_legend=1 00:06:39.203 --rc geninfo_all_blocks=1 00:06:39.203 --rc geninfo_unexecuted_blocks=1 00:06:39.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.203 ' 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.203 --rc genhtml_branch_coverage=1 00:06:39.203 --rc genhtml_function_coverage=1 00:06:39.203 --rc genhtml_legend=1 00:06:39.203 --rc geninfo_all_blocks=1 00:06:39.203 --rc 
geninfo_unexecuted_blocks=1 00:06:39.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.203 ' 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.203 --rc genhtml_branch_coverage=1 00:06:39.203 --rc genhtml_function_coverage=1 00:06:39.203 --rc genhtml_legend=1 00:06:39.203 --rc geninfo_all_blocks=1 00:06:39.203 --rc geninfo_unexecuted_blocks=1 00:06:39.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.203 ' 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.203 --rc genhtml_branch_coverage=1 00:06:39.203 --rc genhtml_function_coverage=1 00:06:39.203 --rc genhtml_legend=1 00:06:39.203 --rc geninfo_all_blocks=1 00:06:39.203 --rc geninfo_unexecuted_blocks=1 00:06:39.203 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.203 ' 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:39.203 20:08:51 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.203 20:08:51 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:39.203 ************************************ 00:06:39.203 START TEST nvmf_llvm_fuzz 00:06:39.203 ************************************ 00:06:39.203 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:39.203 * Looking for test storage... 
00:06:39.203 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.203 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.203 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.203 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.465 --rc genhtml_branch_coverage=1 00:06:39.465 --rc genhtml_function_coverage=1 00:06:39.465 --rc genhtml_legend=1 00:06:39.465 --rc geninfo_all_blocks=1 00:06:39.465 --rc geninfo_unexecuted_blocks=1 00:06:39.465 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.465 ' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.465 --rc genhtml_branch_coverage=1 00:06:39.465 --rc genhtml_function_coverage=1 00:06:39.465 --rc genhtml_legend=1 00:06:39.465 --rc geninfo_all_blocks=1 00:06:39.465 --rc geninfo_unexecuted_blocks=1 00:06:39.465 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.465 ' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.465 --rc genhtml_branch_coverage=1 00:06:39.465 --rc genhtml_function_coverage=1 00:06:39.465 --rc genhtml_legend=1 00:06:39.465 --rc geninfo_all_blocks=1 00:06:39.465 --rc geninfo_unexecuted_blocks=1 00:06:39.465 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.465 ' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.465 --rc genhtml_branch_coverage=1 00:06:39.465 --rc genhtml_function_coverage=1 00:06:39.465 --rc genhtml_legend=1 00:06:39.465 --rc geninfo_all_blocks=1 00:06:39.465 --rc geninfo_unexecuted_blocks=1 00:06:39.465 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.465 ' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:39.465 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:39.466 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:39.466 #define SPDK_CONFIG_H 00:06:39.466 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:39.466 #define SPDK_CONFIG_APPS 1 00:06:39.466 #define SPDK_CONFIG_ARCH native 00:06:39.466 #undef SPDK_CONFIG_ASAN 00:06:39.466 #undef SPDK_CONFIG_AVAHI 00:06:39.466 #undef SPDK_CONFIG_CET 00:06:39.466 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:39.466 #define SPDK_CONFIG_COVERAGE 1 00:06:39.466 #define SPDK_CONFIG_CROSS_PREFIX 00:06:39.466 #undef SPDK_CONFIG_CRYPTO 00:06:39.466 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:39.466 #undef SPDK_CONFIG_CUSTOMOCF 00:06:39.466 #undef SPDK_CONFIG_DAOS 00:06:39.466 #define SPDK_CONFIG_DAOS_DIR 00:06:39.466 #define SPDK_CONFIG_DEBUG 1 00:06:39.466 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:39.466 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:39.466 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:39.466 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:39.466 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:39.466 #undef SPDK_CONFIG_DPDK_UADK 00:06:39.466 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:39.466 #define SPDK_CONFIG_EXAMPLES 1 00:06:39.466 #undef SPDK_CONFIG_FC 00:06:39.466 #define SPDK_CONFIG_FC_PATH 00:06:39.466 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:39.466 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:39.466 #define SPDK_CONFIG_FSDEV 1 00:06:39.466 #undef SPDK_CONFIG_FUSE 00:06:39.466 #define SPDK_CONFIG_FUZZER 1 00:06:39.466 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:39.466 #undef 
SPDK_CONFIG_GOLANG 00:06:39.466 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:39.466 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:39.466 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:39.466 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:39.466 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:39.466 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:39.466 #undef SPDK_CONFIG_HAVE_LZ4 00:06:39.466 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:39.466 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:39.466 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:39.466 #define SPDK_CONFIG_IDXD 1 00:06:39.466 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:39.466 #undef SPDK_CONFIG_IPSEC_MB 00:06:39.466 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:39.466 #define SPDK_CONFIG_ISAL 1 00:06:39.466 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:39.466 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:39.466 #define SPDK_CONFIG_LIBDIR 00:06:39.466 #undef SPDK_CONFIG_LTO 00:06:39.466 #define SPDK_CONFIG_MAX_LCORES 128 00:06:39.467 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:39.467 #define SPDK_CONFIG_NVME_CUSE 1 00:06:39.467 #undef SPDK_CONFIG_OCF 00:06:39.467 #define SPDK_CONFIG_OCF_PATH 00:06:39.467 #define SPDK_CONFIG_OPENSSL_PATH 00:06:39.467 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:39.467 #define SPDK_CONFIG_PGO_DIR 00:06:39.467 #undef SPDK_CONFIG_PGO_USE 00:06:39.467 #define SPDK_CONFIG_PREFIX /usr/local 00:06:39.467 #undef SPDK_CONFIG_RAID5F 00:06:39.467 #undef SPDK_CONFIG_RBD 00:06:39.467 #define SPDK_CONFIG_RDMA 1 00:06:39.467 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:39.467 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:39.467 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:39.467 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:39.467 #undef SPDK_CONFIG_SHARED 00:06:39.467 #undef SPDK_CONFIG_SMA 00:06:39.467 #define SPDK_CONFIG_TESTS 1 00:06:39.467 #undef SPDK_CONFIG_TSAN 00:06:39.467 #define SPDK_CONFIG_UBLK 1 00:06:39.467 #define SPDK_CONFIG_UBSAN 1 00:06:39.467 #undef SPDK_CONFIG_UNIT_TESTS 00:06:39.467 #undef SPDK_CONFIG_URING 00:06:39.467 #define SPDK_CONFIG_URING_PATH 00:06:39.467 #undef SPDK_CONFIG_URING_ZNS 00:06:39.467 #undef SPDK_CONFIG_USDT 00:06:39.467 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:39.467 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:39.467 #define SPDK_CONFIG_VFIO_USER 1 00:06:39.467 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:39.467 #define SPDK_CONFIG_VHOST 1 00:06:39.467 #define SPDK_CONFIG_VIRTIO 1 00:06:39.467 #undef SPDK_CONFIG_VTUNE 00:06:39.467 #define SPDK_CONFIG_VTUNE_DIR 00:06:39.467 #define SPDK_CONFIG_WERROR 1 00:06:39.467 #define SPDK_CONFIG_WPDK_DIR 00:06:39.467 #undef SPDK_CONFIG_XNVME 00:06:39.467 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:39.467 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:39.468 20:08:52 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:39.468 20:08:52 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:39.468 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1606357 ]] 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1606357 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.6nfxGD 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.6nfxGD/tests/nvmf /tmp/spdk.6nfxGD 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=52902326272 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730607104 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8828280832 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30860537856 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340129792 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30863769600 00:06:39.469 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865305600 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=1536000 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:39.470 * Looking for test storage... 
00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=52902326272 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=11042873344 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.470 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.470 --rc genhtml_branch_coverage=1 00:06:39.470 --rc genhtml_function_coverage=1 00:06:39.470 --rc genhtml_legend=1 00:06:39.470 --rc geninfo_all_blocks=1 00:06:39.470 --rc geninfo_unexecuted_blocks=1 00:06:39.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.470 ' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.470 --rc genhtml_branch_coverage=1 00:06:39.470 --rc genhtml_function_coverage=1 00:06:39.470 --rc genhtml_legend=1 00:06:39.470 --rc geninfo_all_blocks=1 00:06:39.470 --rc geninfo_unexecuted_blocks=1 00:06:39.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.470 ' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.470 --rc genhtml_branch_coverage=1 00:06:39.470 --rc genhtml_function_coverage=1 00:06:39.470 --rc genhtml_legend=1 00:06:39.470 --rc geninfo_all_blocks=1 00:06:39.470 --rc geninfo_unexecuted_blocks=1 00:06:39.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.470 ' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.470 --rc genhtml_branch_coverage=1 00:06:39.470 --rc genhtml_function_coverage=1 00:06:39.470 --rc genhtml_legend=1 00:06:39.470 --rc geninfo_all_blocks=1 00:06:39.470 --rc geninfo_unexecuted_blocks=1 00:06:39.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.470 ' 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:39.470 20:08:52 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:39.470 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:39.729 20:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:39.729 [2024-11-26 20:08:52.431172] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:39.730 [2024-11-26 20:08:52.431241] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606515 ] 00:06:39.730 [2024-11-26 20:08:52.627184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.988 [2024-11-26 20:08:52.661122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.988 [2024-11-26 20:08:52.719946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.988 [2024-11-26 20:08:52.736315] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:39.988 INFO: Running with entropic power schedule (0xFF, 100). 00:06:39.988 INFO: Seed: 4186124857 00:06:39.988 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:39.988 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:39.988 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:39.988 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.988 #2 INITED exec/s: 0 rss: 65Mb 00:06:39.988 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:39.988 This may also happen if the target rejected all inputs we tried so far 00:06:39.988 [2024-11-26 20:08:52.791612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:39.988 [2024-11-26 20:08:52.791640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.247 NEW_FUNC[1/716]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:40.247 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:40.247 #5 NEW cov: 12226 ft: 12225 corp: 2/95b lim: 320 exec/s: 0 rss: 73Mb L: 94/94 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:06:40.247 [2024-11-26 20:08:53.122458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.247 [2024-11-26 20:08:53.122497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.247 #6 NEW cov: 12339 ft: 12615 corp: 3/189b lim: 320 exec/s: 0 rss: 73Mb L: 94/94 MS: 1 ChangeByte- 00:06:40.506 [2024-11-26 20:08:53.182624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.506 [2024-11-26 20:08:53.182649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.506 [2024-11-26 20:08:53.182715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:40.506 [2024-11-26 20:08:53.182729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.506 #9 NEW cov: 12348 ft: 13156 corp: 4/368b lim: 320 exec/s: 0 rss: 73Mb L: 179/179 MS: 3 EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:06:40.506 [2024-11-26 20:08:53.222556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.506 [2024-11-26 20:08:53.222581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.506 #10 NEW cov: 12433 ft: 13405 corp: 5/462b lim: 320 exec/s: 0 rss: 73Mb L: 94/179 MS: 1 ShuffleBytes- 00:06:40.506 [2024-11-26 20:08:53.262712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.506 [2024-11-26 20:08:53.262737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.506 #11 NEW cov: 12433 ft: 13446 corp: 6/556b lim: 320 exec/s: 0 rss: 73Mb L: 94/179 MS: 1 ChangeBit- 00:06:40.506 [2024-11-26 20:08:53.302802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.506 [2024-11-26 20:08:53.302827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.506 #12 NEW cov: 12433 ft: 13590 corp: 7/651b lim: 320 exec/s: 0 rss: 73Mb L: 95/179 MS: 1 CrossOver- 00:06:40.506 [2024-11-26 20:08:53.363088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.506 [2024-11-26 20:08:53.363113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.506 [2024-11-26 20:08:53.363171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:5 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7afa 00:06:40.506 [2024-11-26 20:08:53.363185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.506 #13 NEW cov: 12433 ft: 13760 corp: 8/839b lim: 320 exec/s: 0 rss: 73Mb L: 188/188 MS: 1 CrossOver- 00:06:40.506 [2024-11-26 20:08:53.403230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:d1d1d1d1 cdw11:d1d1d1d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd1d1d1d1d1d1d1d1 00:06:40.506 [2024-11-26 20:08:53.403254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.506 [2024-11-26 20:08:53.403313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d1) qid:0 cid:5 nsid:d1d1d1d1 cdw10:7a7a7a7a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.506 [2024-11-26 20:08:53.403327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.506 NEW_FUNC[1/1]: 
0x19757d8 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:40.506 #14 NEW cov: 12446 ft: 14109 corp: 9/995b lim: 320 exec/s: 0 rss: 73Mb L: 156/188 MS: 1 InsertRepeatedBytes- 00:06:40.765 [2024-11-26 20:08:53.443199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.765 [2024-11-26 20:08:53.443224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.765 #15 NEW cov: 12446 ft: 14139 corp: 10/1089b lim: 320 exec/s: 0 rss: 73Mb L: 94/188 MS: 1 ChangeBinInt- 00:06:40.765 [2024-11-26 20:08:53.483286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.765 [2024-11-26 20:08:53.483310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.765 #16 NEW cov: 12446 ft: 14168 corp: 11/1185b lim: 320 exec/s: 0 rss: 73Mb L: 96/188 MS: 1 InsertByte- 00:06:40.765 [2024-11-26 20:08:53.543606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.765 [2024-11-26 20:08:53.543632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.765 [2024-11-26 20:08:53.543685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.765 [2024-11-26 20:08:53.543699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.765 #22 NEW cov: 12446 ft: 14210 corp: 12/1364b lim: 320 exec/s: 0 rss: 74Mb L: 179/188 MS: 1 ChangeByte- 00:06:40.765 [2024-11-26 20:08:53.603767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.765 [2024-11-26 20:08:53.603792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.765 [2024-11-26 20:08:53.603846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:5 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7afa 00:06:40.765 [2024-11-26 20:08:53.603861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.765 #23 NEW cov: 12446 ft: 14276 corp: 13/1552b lim: 320 exec/s: 0 rss: 74Mb L: 188/188 MS: 1 ChangeByte- 00:06:40.765 [2024-11-26 20:08:53.663804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:40.765 [2024-11-26 20:08:53.663829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.024 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:41.024 #24 NEW cov: 12469 ft: 14314 corp: 
14/1648b lim: 320 exec/s: 0 rss: 74Mb L: 96/188 MS: 1 ChangeBit- 00:06:41.024 [2024-11-26 20:08:53.723969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a327a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.024 [2024-11-26 20:08:53.723995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.024 #25 NEW cov: 12469 ft: 14325 corp: 15/1743b lim: 320 exec/s: 0 rss: 74Mb L: 95/188 MS: 1 InsertByte- 00:06:41.024 [2024-11-26 20:08:53.784139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.024 [2024-11-26 20:08:53.784165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.024 #26 NEW cov: 12469 ft: 14335 corp: 16/1838b lim: 320 exec/s: 26 rss: 74Mb L: 95/188 MS: 1 ChangeByte- 00:06:41.024 [2024-11-26 20:08:53.824516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.024 [2024-11-26 20:08:53.824541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.024 [2024-11-26 20:08:53.824604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:5 nsid:7a7a7a7a cdw10:07070707 cdw11:07070707 SGL TRANSPORT DATA BLOCK TRANSPORT 0x70707077a7a7afa 00:06:41.024 [2024-11-26 20:08:53.824617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.024 [2024-11-26 20:08:53.824673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (07) qid:0 cid:6 nsid:7070707 cdw10:07070707 cdw11:07070707 SGL TRANSPORT DATA BLOCK TRANSPORT 0x707070707070707 00:06:41.024 [2024-11-26 20:08:53.824690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.024 [2024-11-26 20:08:53.824747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (07) qid:0 cid:7 nsid:7070707 cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.024 [2024-11-26 20:08:53.824760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.024 NEW_FUNC[1/1]: 0x153ee58 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:06:41.024 #27 NEW cov: 12500 ft: 14586 corp: 17/2148b lim: 320 exec/s: 27 rss: 74Mb L: 310/310 MS: 1 InsertRepeatedBytes- 00:06:41.024 [2024-11-26 20:08:53.894665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.024 [2024-11-26 20:08:53.894690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.024 [2024-11-26 20:08:53.894744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e8) qid:0 cid:5 nsid:e8e8e8e8 cdw10:e8e8e8e8 cdw11:e8e8e8e8 00:06:41.024 [2024-11-26 20:08:53.894758] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.024 [2024-11-26 20:08:53.894814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e8) qid:0 cid:6 nsid:e8e8e8e8 cdw10:7a7a0a7a cdw11:7a7a7a7a 00:06:41.024 [2024-11-26 20:08:53.894827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.024 #28 NEW cov: 12500 ft: 14737 corp: 18/2345b lim: 320 exec/s: 28 rss: 74Mb L: 197/310 MS: 1 InsertRepeatedBytes- 00:06:41.024 [2024-11-26 20:08:53.934523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.025 [2024-11-26 20:08:53.934549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.283 #29 NEW cov: 12500 ft: 14866 corp: 19/2441b lim: 320 exec/s: 29 rss: 74Mb L: 96/310 MS: 1 ShuffleBytes- 00:06:41.283 [2024-11-26 20:08:53.994726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.283 [2024-11-26 20:08:53.994760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.283 #30 NEW cov: 12500 ft: 14881 corp: 20/2537b lim: 320 exec/s: 30 rss: 74Mb L: 96/310 MS: 1 ChangeBit- 00:06:41.283 [2024-11-26 20:08:54.054875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7a327a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.284 [2024-11-26 20:08:54.054901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.284 #31 NEW cov: 12517 ft: 14949 corp: 21/2633b lim: 320 exec/s: 31 rss: 74Mb L: 96/310 MS: 1 CrossOver- 00:06:41.284 [2024-11-26 20:08:54.095051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.284 [2024-11-26 20:08:54.095076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.284 #32 NEW cov: 12517 ft: 14960 corp: 22/2729b lim: 320 exec/s: 32 rss: 74Mb L: 96/310 MS: 1 ChangeByte- 00:06:41.284 [2024-11-26 20:08:54.155152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:7a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.284 [2024-11-26 20:08:54.155180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.284 #33 NEW cov: 12517 ft: 14971 corp: 23/2828b lim: 320 exec/s: 33 rss: 74Mb L: 99/310 MS: 1 EraseBytes- 00:06:41.284 [2024-11-26 20:08:54.195379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.284 [2024-11-26 20:08:54.195404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.284 [2024-11-26 20:08:54.195470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.284 [2024-11-26 20:08:54.195484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.542 #34 NEW cov: 12517 ft: 14984 corp: 24/3007b lim: 320 exec/s: 34 rss: 74Mb L: 179/310 MS: 1 ChangeByte- 00:06:41.542 [2024-11-26 20:08:54.235403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.542 [2024-11-26 20:08:54.235427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.542 #35 NEW cov: 12517 ft: 14992 corp: 25/3131b lim: 320 exec/s: 35 rss: 74Mb L: 124/310 MS: 1 EraseBytes- 00:06:41.542 [2024-11-26 20:08:54.275521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.542 [2024-11-26 20:08:54.275546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.542 #36 NEW cov: 12517 ft: 15008 corp: 26/3226b lim: 320 exec/s: 36 rss: 74Mb L: 95/310 MS: 1 InsertByte- 00:06:41.542 [2024-11-26 20:08:54.315742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.542 [2024-11-26 20:08:54.315766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.543 [2024-11-26 20:08:54.315826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:5 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a0a307a7a 00:06:41.543 [2024-11-26 20:08:54.315840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.543 #37 NEW cov: 12517 ft: 15022 corp: 27/3385b lim: 320 exec/s: 37 rss: 74Mb L: 159/310 MS: 1 CopyPart- 00:06:41.543 [2024-11-26 20:08:54.375936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.543 [2024-11-26 20:08:54.375960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.543 [2024-11-26 20:08:54.376021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:5 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7afa7a 00:06:41.543 [2024-11-26 20:08:54.376035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.543 #38 NEW cov: 12517 ft: 15103 corp: 28/3573b lim: 320 exec/s: 38 rss: 74Mb L: 188/310 MS: 1 CopyPart- 00:06:41.543 [2024-11-26 20:08:54.415931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.543 [2024-11-26 20:08:54.415956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.543 
#39 NEW cov: 12517 ft: 15109 corp: 29/3669b lim: 320 exec/s: 39 rss: 74Mb L: 96/310 MS: 1 ChangeBit- 00:06:41.543 [2024-11-26 20:08:54.456164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.543 [2024-11-26 20:08:54.456188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.543 [2024-11-26 20:08:54.456250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:5 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7afa7a 00:06:41.543 [2024-11-26 20:08:54.456264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.801 #40 NEW cov: 12517 ft: 15126 corp: 30/3857b lim: 320 exec/s: 40 rss: 75Mb L: 188/310 MS: 1 CMP- DE: "\000\221\332\312#`- "- 00:06:41.801 [2024-11-26 20:08:54.516204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.801 [2024-11-26 20:08:54.516228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.801 #41 NEW cov: 12517 ft: 15165 corp: 31/3959b lim: 320 exec/s: 41 rss: 75Mb L: 102/310 MS: 1 PersAutoDict- DE: "\000\221\332\312#`- "- 00:06:41.801 [2024-11-26 20:08:54.556298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.801 [2024-11-26 20:08:54.556323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.801 #42 NEW cov: 12517 ft: 15208 corp: 32/4055b lim: 320 exec/s: 42 rss: 75Mb L: 96/310 MS: 1 ChangeBinInt- 00:06:41.801 [2024-11-26 20:08:54.616488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.801 [2024-11-26 20:08:54.616512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.801 #43 NEW cov: 12517 ft: 15225 corp: 33/4151b lim: 320 exec/s: 43 rss: 75Mb L: 96/310 MS: 1 ChangeBit- 00:06:41.801 [2024-11-26 20:08:54.656641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:ffff307a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.802 [2024-11-26 20:08:54.656666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.802 #44 NEW cov: 12517 ft: 15234 corp: 34/4247b lim: 320 exec/s: 44 rss: 75Mb L: 96/310 MS: 1 CMP- DE: "\377\377\377\024"- 00:06:41.802 [2024-11-26 20:08:54.696704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7aa07a7a cdw10:7a327a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:41.802 [2024-11-26 20:08:54.696728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.061 #45 NEW cov: 12517 ft: 15245 
corp: 35/4343b lim: 320 exec/s: 45 rss: 75Mb L: 96/310 MS: 1 InsertByte- 00:06:42.061 [2024-11-26 20:08:54.756893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:7a7a7a7a cdw10:7a7a7a7a cdw11:7a7a7a7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x7a7a7a7a7a7a7a7a 00:06:42.061 [2024-11-26 20:08:54.756917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.061 #46 NEW cov: 12517 ft: 15256 corp: 36/4437b lim: 320 exec/s: 23 rss: 75Mb L: 94/310 MS: 1 ChangeBit- 00:06:42.061 #46 DONE cov: 12517 ft: 15256 corp: 36/4437b lim: 320 exec/s: 23 rss: 75Mb 00:06:42.061 ###### Recommended dictionary. ###### 00:06:42.061 "\000\221\332\312#`- " # Uses: 1 00:06:42.061 "\377\377\377\024" # Uses: 0 00:06:42.061 ###### End of recommended dictionary. ###### 00:06:42.061 Done 46 runs in 2 second(s) 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.061 20:08:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:42.061 [2024-11-26 20:08:54.917295] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 
24.03.0 initialization... 00:06:42.061 [2024-11-26 20:08:54.917366] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606947 ] 00:06:42.320 [2024-11-26 20:08:55.099484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.320 [2024-11-26 20:08:55.133257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.320 [2024-11-26 20:08:55.192184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.320 [2024-11-26 20:08:55.208549] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:42.320 INFO: Running with entropic power schedule (0xFF, 100). 00:06:42.320 INFO: Seed: 2363160459 00:06:42.320 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:42.320 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:42.320 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:42.320 INFO: A corpus is not provided, starting from an empty corpus 00:06:42.320 #2 INITED exec/s: 0 rss: 65Mb 00:06:42.320 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:42.320 This may also happen if the target rejected all inputs we tried so far 00:06:42.579 [2024-11-26 20:08:55.253768] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:42.579 [2024-11-26 20:08:55.254126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.579 [2024-11-26 20:08:55.254156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.579 [2024-11-26 20:08:55.254219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.579 [2024-11-26 20:08:55.254234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.838 NEW_FUNC[1/717]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:42.838 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.838 #28 NEW cov: 12338 ft: 12336 corp: 2/16b lim: 30 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:06:42.838 [2024-11-26 20:08:55.564675] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x31 00:06:42.838 [2024-11-26 20:08:55.564934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.564965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.838 [2024-11-26 20:08:55.565023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.565039] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.838 #30 NEW cov: 12457 ft: 12959 corp: 3/29b lim: 30 exec/s: 0 rss: 73Mb L: 13/15 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:42.838 [2024-11-26 20:08:55.604673] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x31 00:06:42.838 [2024-11-26 20:08:55.604912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.604938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.838 [2024-11-26 20:08:55.604997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.605011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.838 #31 NEW cov: 12463 ft: 13266 corp: 4/42b lim: 30 exec/s: 0 rss: 73Mb L: 13/15 MS: 1 ShuffleBytes- 00:06:42.838 [2024-11-26 20:08:55.664753] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:42.838 [2024-11-26 20:08:55.665188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.665214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.838 [2024-11-26 20:08:55.665271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.665286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.838 [2024-11-26 20:08:55.665342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.665356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.838 #32 NEW cov: 12548 ft: 13801 corp: 5/62b lim: 30 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:06:42.838 [2024-11-26 20:08:55.724907] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:42.838 [2024-11-26 20:08:55.725356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.838 [2024-11-26 20:08:55.725388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.838 [2024-11-26 20:08:55.725448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.839 [2024-11-26 20:08:55.725463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.839 [2024-11-26 20:08:55.725519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.839 [2024-11-26 20:08:55.725532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.839 #33 NEW cov: 12548 ft: 13911 corp: 6/85b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CopyPart- 00:06:43.098 [2024-11-26 20:08:55.785094] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.098 [2024-11-26 20:08:55.785531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.785557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.785612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.785627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.785683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.785697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.098 #35 NEW cov: 12548 ft: 14013 corp: 7/107b lim: 30 exec/s: 0 rss: 73Mb L: 22/23 MS: 2 CopyPart-CrossOver- 00:06:43.098 [2024-11-26 20:08:55.825184] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.098 [2024-11-26 20:08:55.825634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.825661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.825720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.825736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.825793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.825807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.098 #36 NEW cov: 12548 ft: 14137 corp: 8/129b lim: 30 exec/s: 0 rss: 73Mb L: 22/23 MS: 1 ChangeBinInt- 00:06:43.098 [2024-11-26 20:08:55.885291] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.098 [2024-11-26 20:08:55.885621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.885646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.885707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.885725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.098 #37 NEW cov: 12548 ft: 14162 corp: 9/144b lim: 30 exec/s: 0 rss: 73Mb L: 15/23 MS: 1 ShuffleBytes- 00:06:43.098 [2024-11-26 20:08:55.925468] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.098 [2024-11-26 20:08:55.926007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.926032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.926090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.926104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.926161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.926175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.926228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.926242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.098 #38 NEW cov: 12548 ft: 14804 corp: 10/169b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:06:43.098 [2024-11-26 20:08:55.965580] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.098 [2024-11-26 20:08:55.966020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.966045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.966106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.966121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:55.966177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:55.966191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.098 #39 NEW cov: 12548 ft: 14888 
corp: 11/191b lim: 30 exec/s: 0 rss: 73Mb L: 22/25 MS: 1 ShuffleBytes- 00:06:43.098 [2024-11-26 20:08:56.005666] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.098 [2024-11-26 20:08:56.006001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:56.006026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.098 [2024-11-26 20:08:56.006086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.098 [2024-11-26 20:08:56.006100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.357 #40 NEW cov: 12548 ft: 14926 corp: 12/204b lim: 30 exec/s: 0 rss: 73Mb L: 13/25 MS: 1 EraseBytes- 00:06:43.357 [2024-11-26 20:08:56.065818] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.357 [2024-11-26 20:08:56.066152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.066177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.357 [2024-11-26 20:08:56.066235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.066250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.357 #41 NEW cov: 12548 ft: 14984 corp: 13/220b lim: 30 exec/s: 0 rss: 73Mb L: 16/25 MS: 1 EraseBytes- 00:06:43.357 [2024-11-26 20:08:56.106034] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.357 [2024-11-26 20:08:56.106602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.106628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.357 [2024-11-26 20:08:56.106690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.106705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.357 [2024-11-26 20:08:56.106765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.106779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.357 [2024-11-26 20:08:56.106835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.106848] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.357 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:43.357 #42 NEW cov: 12571 ft: 15032 corp: 14/245b lim: 30 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:06:43.357 [2024-11-26 20:08:56.166137] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.357 [2024-11-26 20:08:56.166474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.166499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.357 [2024-11-26 20:08:56.166559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.357 [2024-11-26 20:08:56.166573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.358 #43 NEW cov: 12571 ft: 15040 corp: 15/260b lim: 30 exec/s: 0 rss: 74Mb L: 15/25 MS: 1 EraseBytes- 00:06:43.358 [2024-11-26 20:08:56.226331] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.358 [2024-11-26 20:08:56.226779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.358 [2024-11-26 20:08:56.226804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.358 [2024-11-26 20:08:56.226865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.358 [2024-11-26 20:08:56.226882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.358 [2024-11-26 20:08:56.226940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.358 [2024-11-26 20:08:56.226955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.358 #44 NEW cov: 12571 ft: 15055 corp: 16/278b lim: 30 exec/s: 44 rss: 74Mb L: 18/25 MS: 1 EraseBytes- 00:06:43.358 [2024-11-26 20:08:56.266476] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.358 [2024-11-26 20:08:56.266814] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:06:43.358 [2024-11-26 20:08:56.267050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.358 [2024-11-26 20:08:56.267076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.358 [2024-11-26 20:08:56.267136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.358 [2024-11-26 
20:08:56.267151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.358 [2024-11-26 20:08:56.267210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.358 [2024-11-26 20:08:56.267224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.358 [2024-11-26 20:08:56.267280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.358 [2024-11-26 20:08:56.267295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.617 #45 NEW cov: 12571 ft: 15064 corp: 17/304b lim: 30 exec/s: 45 rss: 74Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:06:43.617 [2024-11-26 20:08:56.326645] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006a6a 00:06:43.617 [2024-11-26 20:08:56.326772] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006a6a 00:06:43.617 [2024-11-26 20:08:56.326889] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006a6a 00:06:43.617 [2024-11-26 20:08:56.327113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:506a026a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.327139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.327198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6a6a026a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.327212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.327271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6a6a026a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.327285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.617 #49 NEW cov: 12571 ft: 15107 corp: 18/325b lim: 30 exec/s: 49 rss: 74Mb L: 21/26 MS: 4 ChangeBit-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:06:43.617 [2024-11-26 20:08:56.366716] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.617 [2024-11-26 20:08:56.367198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.367228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.367288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.367303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 
20:08:56.367361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.367375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.617 #50 NEW cov: 12571 ft: 15122 corp: 19/346b lim: 30 exec/s: 50 rss: 74Mb L: 21/26 MS: 1 CopyPart- 00:06:43.617 [2024-11-26 20:08:56.426939] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:43.617 [2024-11-26 20:08:56.427064] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:43.617 [2024-11-26 20:08:56.427184] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:43.617 [2024-11-26 20:08:56.427407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.427433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.427495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.427510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.427566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.427580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.617 #55 NEW cov: 12571 ft: 15130 corp: 20/364b lim: 30 exec/s: 55 rss: 74Mb L: 18/26 MS: 5 ChangeBit-ShuffleBytes-InsertByte-InsertByte-InsertRepeatedBytes- 00:06:43.617 [2024-11-26 20:08:56.467108] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x31 00:06:43.617 [2024-11-26 20:08:56.467338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.467364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.467425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.467439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.617 #56 NEW cov: 12571 ft: 15174 corp: 21/377b lim: 30 exec/s: 56 rss: 74Mb L: 13/26 MS: 1 ChangeBit- 00:06:43.617 [2024-11-26 20:08:56.507189] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.617 [2024-11-26 20:08:56.507751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.507777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.507902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.507926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.617 [2024-11-26 20:08:56.507984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.617 [2024-11-26 20:08:56.507998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.876 NEW_FUNC[1/2]: 0x136d278 in nvmf_ctrlr_unmask_aen /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2285 00:06:43.876 NEW_FUNC[2/2]: 0x136d508 in nvmf_get_error_log_page /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2339 00:06:43.876 #57 NEW cov: 12581 ft: 15209 corp: 22/402b lim: 30 exec/s: 57 rss: 74Mb L: 25/26 MS: 1 ShuffleBytes- 00:06:43.876 [2024-11-26 20:08:56.567305] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.876 [2024-11-26 20:08:56.567433] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (168596) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.567553] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (168596) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.567784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.567810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.567868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a4a400a4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.567883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.567941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a4a400a4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.567955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.877 #58 NEW cov: 12581 ft: 15282 corp: 23/424b lim: 30 exec/s: 58 rss: 74Mb L: 22/26 MS: 1 InsertRepeatedBytes- 00:06:43.877 [2024-11-26 20:08:56.607415] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.607545] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (43012) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.607774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.607800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.607862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.607877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.877 #59 NEW cov: 12581 ft: 15349 corp: 24/439b lim: 30 exec/s: 59 rss: 74Mb L: 15/26 MS: 1 ChangeByte- 00:06:43.877 [2024-11-26 20:08:56.647605] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.647730] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (168596) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.647864] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (168596) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.648299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.648325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.648390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a4a400a4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.648404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.648463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a4a400a4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.648478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.648539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.648553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.648609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.648623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.877 #60 NEW cov: 12581 ft: 15384 corp: 25/469b lim: 30 exec/s: 60 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:43.877 [2024-11-26 20:08:56.707703] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.708044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.708069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.708129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.708144] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.877 #61 NEW cov: 12581 ft: 15443 corp: 26/485b lim: 30 exec/s: 61 rss: 74Mb L: 16/30 MS: 1 ShuffleBytes- 00:06:43.877 [2024-11-26 20:08:56.747868] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.748419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.748445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.748506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.748521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.748578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.748591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.748654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.748668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.877 #62 NEW cov: 12581 ft: 15458 corp: 27/510b lim: 30 exec/s: 62 rss: 74Mb L: 25/30 MS: 1 CrossOver- 00:06:43.877 [2024-11-26 20:08:56.788002] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.788341] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (4100) > buf size (4096) 00:06:43.877 [2024-11-26 20:08:56.788570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.788595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.788660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.788675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.788733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.788746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.877 [2024-11-26 20:08:56.788803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.877 [2024-11-26 20:08:56.788817] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.136 #63 NEW cov: 12581 ft: 15473 corp: 28/535b lim: 30 exec/s: 63 rss: 74Mb L: 25/30 MS: 1 CMP- DE: "\001\000\000\000\000\000\004\000"- 00:06:44.136 [2024-11-26 20:08:56.828101] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:44.136 [2024-11-26 20:08:56.828673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.136 [2024-11-26 20:08:56.828699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.136 [2024-11-26 20:08:56.828816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.136 [2024-11-26 20:08:56.828831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.136 [2024-11-26 20:08:56.828886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.136 [2024-11-26 20:08:56.828900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.136 #64 NEW cov: 12581 ft: 15509 corp: 29/560b lim: 30 exec/s: 64 rss: 74Mb L: 25/30 MS: 1 ChangeByte- 00:06:44.136 [2024-11-26 20:08:56.888221] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:44.136 [2024-11-26 20:08:56.888350] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:06:44.136 [2024-11-26 20:08:56.888575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.136 [2024-11-26 20:08:56.888605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.136 [2024-11-26 20:08:56.888662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.136 [2024-11-26 20:08:56.888678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.136 #65 NEW cov: 12581 ft: 15544 corp: 30/572b lim: 30 exec/s: 65 rss: 75Mb L: 12/30 MS: 1 CrossOver- 00:06:44.137 [2024-11-26 20:08:56.948467] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x31 00:06:44.137 [2024-11-26 20:08:56.948726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.137 [2024-11-26 20:08:56.948751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.137 [2024-11-26 20:08:56.948810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00100080 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.137 [2024-11-26 20:08:56.948824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.137 #66 NEW cov: 12581 ft: 15551 corp: 31/585b lim: 30 exec/s: 66 rss: 75Mb L: 13/30 MS: 1 ChangeBit- 00:06:44.137 [2024-11-26 20:08:57.008530] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:44.137 [2024-11-26 20:08:57.008872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.137 [2024-11-26 20:08:57.008898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.137 [2024-11-26 20:08:57.008953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.137 [2024-11-26 20:08:57.008967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.137 #67 NEW cov: 12581 ft: 15556 corp: 32/599b lim: 30 exec/s: 67 rss: 75Mb L: 14/30 MS: 1 EraseBytes- 00:06:44.395 [2024-11-26 20:08:57.068801] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:06:44.395 [2024-11-26 20:08:57.069370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.395 [2024-11-26 20:08:57.069396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.395 [2024-11-26 20:08:57.069453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.069468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.396 [2024-11-26 20:08:57.069522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.069536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.396 [2024-11-26 20:08:57.069591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.069610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.396 #68 NEW cov: 12581 ft: 15571 corp: 33/626b lim: 30 exec/s: 68 rss: 75Mb L: 27/30 MS: 1 CMP- DE: "\001\000\000\006"- 00:06:44.396 [2024-11-26 20:08:57.109153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.109178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.396 [2024-11-26 20:08:57.109235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.109248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.396 #69 NEW cov: 12581 ft: 15604 corp: 34/639b lim: 30 exec/s: 69 rss: 75Mb L: 13/30 MS: 1 ChangeBit- 00:06:44.396 [2024-11-26 20:08:57.148939] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:44.396 [2024-11-26 20:08:57.149292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.149317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.396 [2024-11-26 20:08:57.149375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.149389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.396 #70 NEW cov: 12581 ft: 15606 corp: 35/655b lim: 30 exec/s: 70 rss: 75Mb L: 16/30 MS: 1 EraseBytes- 00:06:44.396 [2024-11-26 20:08:57.209140] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:44.396 [2024-11-26 20:08:57.209612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.209638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.396 [2024-11-26 20:08:57.209695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.209710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.396 [2024-11-26 20:08:57.209764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.396 [2024-11-26 20:08:57.209778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.396 #71 NEW cov: 12581 ft: 15609 corp: 36/677b lim: 30 exec/s: 35 rss: 75Mb L: 22/30 MS: 1 InsertByte- 00:06:44.396 #71 DONE cov: 12581 ft: 15609 corp: 36/677b lim: 30 exec/s: 35 rss: 75Mb 00:06:44.396 ###### Recommended dictionary. ###### 00:06:44.396 "\001\000\000\000\000\000\004\000" # Uses: 0 00:06:44.396 "\001\000\000\006" # Uses: 0 00:06:44.396 ###### End of recommended dictionary. 
###### 00:06:44.396 Done 71 runs in 2 second(s) 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.654 20:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:44.654 [2024-11-26 20:08:57.398680] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:44.654 [2024-11-26 20:08:57.398766] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607484 ] 00:06:44.913 [2024-11-26 20:08:57.584327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.913 [2024-11-26 20:08:57.618433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.913 [2024-11-26 20:08:57.677227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.913 [2024-11-26 20:08:57.693593] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:44.913 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:44.913 INFO: Seed: 553188126 00:06:44.913 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:44.913 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:44.913 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:44.913 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.913 #2 INITED exec/s: 0 rss: 65Mb 00:06:44.913 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:44.913 This may also happen if the target rejected all inputs we tried so far 00:06:44.913 [2024-11-26 20:08:57.749410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.913 [2024-11-26 20:08:57.749438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.914 [2024-11-26 20:08:57.749499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.914 [2024-11-26 20:08:57.749513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.914 [2024-11-26 20:08:57.749571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.914 [2024-11-26 20:08:57.749584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.914 [2024-11-26 20:08:57.749644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.914 [2024-11-26 20:08:57.749658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.172 NEW_FUNC[1/716]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:45.172 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:45.172 #20 NEW cov: 12258 ft: 12253 corp: 2/29b lim: 35 exec/s: 0 rss: 73Mb L: 28/28 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:45.172 [2024-11-26 20:08:58.060180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.172 [2024-11-26 20:08:58.060211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.172 [2024-11-26 20:08:58.060285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.172 [2024-11-26 20:08:58.060300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.172 [2024-11-26 20:08:58.060356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.172 [2024-11-26 20:08:58.060369] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.172 [2024-11-26 20:08:58.060422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.172 [2024-11-26 20:08:58.060435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.432 #21 NEW cov: 12373 ft: 12947 corp: 3/59b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CopyPart- 00:06:45.432 [2024-11-26 20:08:58.120177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.120203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.120257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500b845 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.120270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.120324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.120337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.120389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.120403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.432 #22 NEW cov: 12379 ft: 13149 corp: 4/89b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:45.432 [2024-11-26 20:08:58.180383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.180408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.180479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.180493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.180548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.180562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.180623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.180638] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.432 #23 NEW cov: 12464 ft: 13429 corp: 5/119b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBit- 00:06:45.432 [2024-11-26 20:08:58.220392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.220419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.220474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.220488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.220540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.220554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.220609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.220622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.432 #24 NEW cov: 12464 ft: 13604 corp: 6/153b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:06:45.432 [2024-11-26 20:08:58.280644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.280669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.280726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500b845 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.280740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.280792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.432 [2024-11-26 20:08:58.280806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.432 [2024-11-26 20:08:58.280857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-11-26 20:08:58.280870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.433 #25 NEW cov: 12464 ft: 13641 corp: 7/187b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:45.433 [2024-11-26 20:08:58.340762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-11-26 20:08:58.340787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.433 [2024-11-26 20:08:58.340843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-11-26 20:08:58.340857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.433 [2024-11-26 20:08:58.340910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-11-26 20:08:58.340940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.433 [2024-11-26 20:08:58.340992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45005545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.433 [2024-11-26 20:08:58.341009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.691 #26 NEW cov: 12464 ft: 13674 corp: 8/221b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:06:45.691 [2024-11-26 20:08:58.400840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.691 [2024-11-26 20:08:58.400866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.691 [2024-11-26 20:08:58.400919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500b845 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.691 [2024-11-26 20:08:58.400932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.691 [2024-11-26 20:08:58.400987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.691 [2024-11-26 20:08:58.401000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.691 #27 NEW cov: 12464 ft: 14214 corp: 9/244b lim: 35 exec/s: 0 rss: 73Mb L: 23/34 MS: 1 EraseBytes- 00:06:45.692 [2024-11-26 20:08:58.441112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.441139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.441194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.441207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.441260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:45.692 [2024-11-26 20:08:58.441273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.441325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.441338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.692 #28 NEW cov: 12464 ft: 14239 corp: 10/274b lim: 35 exec/s: 0 rss: 73Mb L: 30/34 MS: 1 ChangeBinInt- 00:06:45.692 [2024-11-26 20:08:58.481314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.481340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.481396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:2800b828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.481409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.481460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28450028 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.481474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.481527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.481546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.481603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.481616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.692 #29 NEW cov: 12464 ft: 14341 corp: 11/309b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:45.692 [2024-11-26 20:08:58.521295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.521321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.521375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.521389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.521441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45000045 cdw11:45001e45 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 
[2024-11-26 20:08:58.521455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.521506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.521520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.692 #30 NEW cov: 12464 ft: 14357 corp: 12/339b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 1 ChangeBinInt- 00:06:45.692 [2024-11-26 20:08:58.581302] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:45.692 [2024-11-26 20:08:58.581530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.581557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.581612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.581627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.581682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:00004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.581699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.692 [2024-11-26 20:08:58.581751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:95450000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.692 [2024-11-26 20:08:58.581767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.692 #31 NEW cov: 12475 ft: 14480 corp: 13/369b lim: 35 exec/s: 0 rss: 74Mb L: 30/35 MS: 1 CMP- DE: "\000\000\000\225"- 00:06:45.952 [2024-11-26 20:08:58.621587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.621618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.952 [2024-11-26 20:08:58.621679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.621694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.952 [2024-11-26 20:08:58.621746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.621759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.952 [2024-11-26 20:08:58.621812] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004557 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.621825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.952 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:45.952 #37 NEW cov: 12498 ft: 14572 corp: 14/400b lim: 35 exec/s: 0 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:06:45.952 [2024-11-26 20:08:58.661431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.661457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.952 [2024-11-26 20:08:58.661527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.661542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.952 #38 NEW cov: 12498 ft: 14877 corp: 15/419b lim: 35 exec/s: 0 rss: 74Mb L: 19/35 MS: 1 EraseBytes- 00:06:45.952 [2024-11-26 20:08:58.701251] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:45.952 [2024-11-26 20:08:58.701485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8a000000 cdw11:00000095 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.701512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.952 #41 NEW cov: 12498 ft: 15169 corp: 16/429b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 3 PersAutoDict-InsertByte-PersAutoDict- DE: "\000\000\000\225"-"\000\000\000\225"- 00:06:45.952 [2024-11-26 20:08:58.741687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.741713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.952 [2024-11-26 20:08:58.741783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.952 [2024-11-26 20:08:58.741797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.952 #42 NEW cov: 12498 ft: 15191 corp: 17/444b lim: 35 exec/s: 42 rss: 74Mb L: 15/35 MS: 1 EraseBytes- 00:06:45.952 [2024-11-26 20:08:58.781855] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:45.952 [2024-11-26 20:08:58.782092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.782119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.953 [2024-11-26 20:08:58.782175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 
nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.782192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.953 [2024-11-26 20:08:58.782248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:00004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.782262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.953 [2024-11-26 20:08:58.782318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:95450000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.782333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.953 #43 NEW cov: 12498 ft: 15215 corp: 18/474b lim: 35 exec/s: 43 rss: 74Mb L: 30/35 MS: 1 ChangeByte- 00:06:45.953 [2024-11-26 20:08:58.842042] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:45.953 [2024-11-26 20:08:58.842277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:8a004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.842303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.953 [2024-11-26 20:08:58.842360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.842375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.953 [2024-11-26 20:08:58.842429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:00004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.842442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.953 [2024-11-26 20:08:58.842497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:95450000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.953 [2024-11-26 20:08:58.842512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.212 #44 NEW cov: 12498 ft: 15243 corp: 19/504b lim: 35 exec/s: 44 rss: 74Mb L: 30/35 MS: 1 ChangeByte- 00:06:46.212 [2024-11-26 20:08:58.902423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.902449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:58.902521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500b845 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.902535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:46.212 [2024-11-26 20:08:58.902589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.902607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:58.902663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.902676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.212 #45 NEW cov: 12498 ft: 15315 corp: 20/538b lim: 35 exec/s: 45 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:46.212 [2024-11-26 20:08:58.962287] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.212 [2024-11-26 20:08:58.962610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.962637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:58.962692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:00004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.962706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:58.962758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0b450000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.962772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:58.962824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45000045 cdw11:95000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:58.962837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.212 #46 NEW cov: 12498 ft: 15322 corp: 21/572b lim: 35 exec/s: 46 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "\000\000\000\013"- 00:06:46.212 [2024-11-26 20:08:59.002542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.002568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.002642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.002657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.002712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 
20:08:59.002725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.212 #47 NEW cov: 12498 ft: 15355 corp: 22/597b lim: 35 exec/s: 47 rss: 74Mb L: 25/35 MS: 1 EraseBytes- 00:06:46.212 [2024-11-26 20:08:59.042606] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.212 [2024-11-26 20:08:59.042827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.042853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.042909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.042923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.042976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:00004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.042990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.043043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:21950000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.043058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.212 #48 NEW cov: 12498 ft: 15368 corp: 23/628b lim: 35 exec/s: 48 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:06:46.212 [2024-11-26 20:08:59.082688] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.212 [2024-11-26 20:08:59.082906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3f45000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.082931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.082989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:55450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.083003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.083054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.083067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.083123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00950000 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.083139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.212 #49 NEW cov: 12498 ft: 15389 corp: 24/659b lim: 35 exec/s: 49 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:06:46.212 [2024-11-26 20:08:59.123019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.123046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.123104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.123118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.123171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.123185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.212 [2024-11-26 20:08:59.123237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:4500452b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.212 [2024-11-26 20:08:59.123251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.470 #50 NEW cov: 12498 ft: 15390 corp: 25/690b lim: 35 exec/s: 50 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:06:46.470 [2024-11-26 20:08:59.162840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.162865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.162922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:ba0045c2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.162936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.470 #51 NEW cov: 12498 ft: 15410 corp: 26/705b lim: 35 exec/s: 51 rss: 74Mb L: 15/35 MS: 1 ChangeBinInt- 00:06:46.470 [2024-11-26 20:08:59.223434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.223462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.223515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:458a0055 cdw11:8a008a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.223528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.223582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.223595] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.223651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.223664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.223716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.223729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.470 #52 NEW cov: 12498 ft: 15427 corp: 27/740b lim: 35 exec/s: 52 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:46.470 [2024-11-26 20:08:59.263179] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.470 [2024-11-26 20:08:59.263404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3f45000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.263429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.263484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:55450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.263498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.263552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.263565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.263619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00950000 cdw11:23004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.263635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.470 #53 NEW cov: 12498 ft: 15445 corp: 28/772b lim: 35 exec/s: 53 rss: 74Mb L: 32/35 MS: 1 InsertByte- 00:06:46.470 [2024-11-26 20:08:59.323560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.323585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.323641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500b845 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.323656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.323707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.323724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.323777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.323790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.470 #54 NEW cov: 12498 ft: 15466 corp: 29/806b lim: 35 exec/s: 54 rss: 74Mb L: 34/35 MS: 1 CrossOver- 00:06:46.470 [2024-11-26 20:08:59.363650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.363675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.363731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.363745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.363799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:00004500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.363811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.470 [2024-11-26 20:08:59.363865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450095 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.470 [2024-11-26 20:08:59.363878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.470 #55 NEW cov: 12498 ft: 15471 corp: 30/840b lim: 35 exec/s: 55 rss: 74Mb L: 34/35 MS: 1 PersAutoDict- DE: "\000\000\000\225"- 00:06:46.728 [2024-11-26 20:08:59.403935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4582000a cdw11:82008282 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.403961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.404017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.404030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.404086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.404099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.404152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.404166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.404219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.404232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.728 #56 NEW cov: 12498 ft: 15540 corp: 31/875b lim: 35 exec/s: 56 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:46.728 [2024-11-26 20:08:59.443661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.443689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.443743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:450a0055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.443757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.728 #57 NEW cov: 12498 ft: 15546 corp: 32/895b lim: 35 exec/s: 57 rss: 74Mb L: 20/35 MS: 1 CrossOver- 00:06:46.728 [2024-11-26 20:08:59.484038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.484062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.484117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:baba00bb cdw11:ba0047ba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.484131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.484186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:454500b3 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.484199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.484250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.484263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.728 #58 NEW cov: 12498 ft: 15605 corp: 33/929b lim: 35 exec/s: 58 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:46.728 [2024-11-26 20:08:59.544091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.544115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 
20:08:59.544170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:4500b845 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.544184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.544237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.544251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.728 #59 NEW cov: 12498 ft: 15608 corp: 34/952b lim: 35 exec/s: 59 rss: 74Mb L: 23/35 MS: 1 ChangeByte- 00:06:46.728 [2024-11-26 20:08:59.604392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.604418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.604474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450055 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.604487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.604539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:00004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.604556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.728 [2024-11-26 20:08:59.604612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:95000021 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.728 [2024-11-26 20:08:59.604625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.728 #60 NEW cov: 12498 ft: 15613 corp: 35/983b lim: 35 exec/s: 60 rss: 75Mb L: 31/35 MS: 1 ShuffleBytes- 00:06:46.987 [2024-11-26 20:08:59.664549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:4500000b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.664575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.987 [2024-11-26 20:08:59.664647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:baba00bb cdw11:ba0047ba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.664661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.987 [2024-11-26 20:08:59.664716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:454500b3 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.664729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.987 [2024-11-26 20:08:59.664783] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.664796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.987 #61 NEW cov: 12498 ft: 15622 corp: 36/1017b lim: 35 exec/s: 61 rss: 75Mb L: 34/35 MS: 1 PersAutoDict- DE: "\000\000\000\013"- 00:06:46.987 [2024-11-26 20:08:59.724709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4545000a cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.724735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.987 [2024-11-26 20:08:59.724794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.724808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.987 [2024-11-26 20:08:59.724876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.724890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.987 [2024-11-26 20:08:59.724942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:45450045 cdw11:45004545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.987 [2024-11-26 20:08:59.724956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.987 #62 NEW cov: 12498 ft: 15626 corp: 37/1049b lim: 35 exec/s: 31 rss: 75Mb L: 32/35 MS: 1 CrossOver- 00:06:46.987 #62 DONE cov: 12498 ft: 15626 corp: 37/1049b lim: 35 exec/s: 31 rss: 75Mb 00:06:46.987 ###### Recommended dictionary. ###### 00:06:46.987 "\000\000\000\225" # Uses: 3 00:06:46.987 "\000\000\000\013" # Uses: 1 00:06:46.987 ###### End of recommended dictionary. 
###### 00:06:46.987 Done 62 runs in 2 second(s) 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:46.987 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:46.988 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:46.988 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:46.988 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.988 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.988 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.988 20:08:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:46.988 [2024-11-26 20:08:59.890849] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:46.988 [2024-11-26 20:08:59.890925] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607769 ] 00:06:47.246 [2024-11-26 20:09:00.085292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.246 [2024-11-26 20:09:00.127862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.505 [2024-11-26 20:09:00.187305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.505 [2024-11-26 20:09:00.203688] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:47.505 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:47.505 INFO: Seed: 3063200148 00:06:47.505 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:47.505 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:47.505 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:47.505 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.505 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.505 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.505 This may also happen if the target rejected all inputs we tried so far 00:06:47.763 NEW_FUNC[1/705]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:47.763 NEW_FUNC[2/705]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.763 #4 NEW cov: 12137 ft: 12126 corp: 2/6b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:47.763 [2024-11-26 20:09:00.631090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.764 [2024-11-26 20:09:00.631144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.764 NEW_FUNC[1/17]: 0x1379068 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3484 00:06:47.764 NEW_FUNC[2/17]: 0x1379be8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3426 00:06:47.764 #5 NEW cov: 12527 ft: 13456 corp: 3/14b lim: 20 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:48.022 #6 NEW cov: 12533 ft: 13649 corp: 4/24b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:48.022 #7 NEW cov: 12618 ft: 13907 corp: 5/29b lim: 20 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:48.022 #8 NEW cov: 12618 ft: 13971 corp: 6/39b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:48.022 #9 NEW cov: 12618 ft: 14036 corp: 7/45b lim: 20 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:06:48.280 #10 NEW cov: 12618 ft: 14158 corp: 8/56b lim: 20 exec/s: 0 rss: 74Mb L: 11/11 MS: 1 InsertByte- 00:06:48.280 #11 NEW cov: 12618 ft: 14220 corp: 9/62b lim: 20 exec/s: 0 rss: 74Mb L: 6/11 MS: 1 InsertByte- 00:06:48.280 #12 NEW cov: 12618 ft: 14277 corp: 10/69b lim: 20 exec/s: 0 rss: 74Mb L: 7/11 MS: 1 InsertByte- 00:06:48.280 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:48.280 #13 NEW cov: 12641 ft: 14345 corp: 11/74b lim: 20 exec/s: 0 rss: 74Mb L: 5/11 MS: 1 ChangeBit- 00:06:48.538 #14 NEW cov: 12641 ft: 14367 corp: 12/80b lim: 20 exec/s: 0 rss: 74Mb L: 6/11 MS: 1 ShuffleBytes- 00:06:48.538 #15 NEW cov: 12641 ft: 14423 corp: 13/88b lim: 20 exec/s: 15 rss: 74Mb L: 8/11 MS: 1 InsertByte- 00:06:48.538 #16 NEW cov: 12641 ft: 14464 corp: 14/99b lim: 20 exec/s: 16 rss: 74Mb L: 11/11 MS: 1 ChangeByte- 00:06:48.538 #17 NEW cov: 12641 ft: 14501 corp: 15/106b lim: 20 exec/s: 17 rss: 74Mb L: 7/11 MS: 1 CrossOver- 00:06:48.797 #18 NEW cov: 12645 ft: 14769 corp: 16/119b lim: 20 exec/s: 18 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:06:48.797 [2024-11-26 20:09:01.503771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:06:48.797 [2024-11-26 20:09:01.503813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.797 #19 NEW cov: 12645 ft: 14849 corp: 17/128b lim: 20 exec/s: 19 rss: 75Mb L: 9/13 MS: 1 InsertByte- 00:06:48.797 #20 NEW cov: 12645 ft: 14918 corp: 18/136b lim: 20 exec/s: 20 rss: 75Mb L: 8/13 MS: 1 InsertByte- 00:06:48.797 #21 NEW cov: 12645 ft: 14928 corp: 19/142b lim: 20 exec/s: 21 rss: 75Mb L: 6/13 MS: 1 EraseBytes- 00:06:49.055 NEW_FUNC[1/2]: 0x14eeed8 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:784 00:06:49.056 NEW_FUNC[2/2]: 0x15163e8 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3702 00:06:49.056 #22 NEW cov: 12701 ft: 15004 corp: 20/151b lim: 20 exec/s: 22 rss: 75Mb L: 9/13 MS: 1 ChangeBit- 00:06:49.056 #28 NEW cov: 12701 ft: 15017 corp: 21/157b lim: 20 exec/s: 28 rss: 75Mb L: 6/13 MS: 1 InsertByte- 00:06:49.056 #29 NEW cov: 12701 ft: 15037 corp: 22/164b lim: 20 exec/s: 29 rss: 75Mb L: 7/13 MS: 1 EraseBytes- 00:06:49.056 #32 NEW cov: 12718 ft: 15247 corp: 23/180b lim: 20 exec/s: 32 rss: 75Mb L: 16/16 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:49.056 #33 NEW cov: 12719 ft: 15274 corp: 24/189b lim: 20 exec/s: 33 rss: 75Mb L: 9/16 MS: 1 CMP- DE: "\000\001"- 00:06:49.315 #34 NEW cov: 12719 ft: 15295 corp: 25/207b lim: 20 exec/s: 34 rss: 75Mb L: 18/18 MS: 1 CopyPart- 00:06:49.315 #35 NEW cov: 12719 ft: 15298 corp: 26/213b lim: 20 exec/s: 35 rss: 75Mb L: 6/18 MS: 1 ChangeByte- 00:06:49.315 #36 NEW cov: 12719 ft: 15321 corp: 27/226b lim: 20 exec/s: 36 rss: 75Mb L: 13/18 MS: 1 InsertRepeatedBytes- 00:06:49.315 #37 NEW cov: 12719 ft: 15336 corp: 28/237b lim: 20 exec/s: 37 rss: 75Mb L: 11/18 MS: 1 CopyPart- 00:06:49.574 #38 NEW cov: 12719 ft: 15345 corp: 29/248b lim: 20 exec/s: 19 rss: 75Mb L: 11/18 MS: 1 PersAutoDict- DE: "\000\001"- 00:06:49.574 #38 DONE cov: 12719 ft: 15345 corp: 29/248b lim: 20 exec/s: 19 rss: 75Mb 00:06:49.574 ###### Recommended dictionary. ###### 00:06:49.574 "\000\001" # Uses: 1 00:06:49.574 ###### End of recommended dictionary. 
###### 00:06:49.574 Done 38 runs in 2 second(s) 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.574 20:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:49.574 [2024-11-26 20:09:02.426055] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:49.574 [2024-11-26 20:09:02.426126] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608429 ] 00:06:49.834 [2024-11-26 20:09:02.614616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.834 [2024-11-26 20:09:02.651489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.834 [2024-11-26 20:09:02.710694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.834 [2024-11-26 20:09:02.727002] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:49.834 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:49.834 INFO: Seed: 1292222119 00:06:49.834 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:49.834 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:49.834 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:49.834 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.834 #2 INITED exec/s: 0 rss: 65Mb 00:06:49.834 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.834 This may also happen if the target rejected all inputs we tried so far 00:06:50.093 [2024-11-26 20:09:02.772731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.093 [2024-11-26 20:09:02.772760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.093 [2024-11-26 20:09:02.772815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.093 [2024-11-26 20:09:02.772835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.093 [2024-11-26 20:09:02.772888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.093 [2024-11-26 20:09:02.772901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.352 NEW_FUNC[1/717]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:50.352 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:50.352 #5 NEW cov: 12281 ft: 12281 corp: 2/27b lim: 35 exec/s: 0 rss: 73Mb L: 26/26 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:06:50.352 [2024-11-26 20:09:03.103585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.103623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.103678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.103692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.103745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.103759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.352 #7 NEW cov: 12395 ft: 12779 corp: 3/52b lim: 35 exec/s: 0 rss: 73Mb L: 25/26 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:50.352 [2024-11-26 20:09:03.143510] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.143537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.143592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.143611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.143663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2140003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.143676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.352 #13 NEW cov: 12401 ft: 12987 corp: 4/77b lim: 35 exec/s: 0 rss: 74Mb L: 25/26 MS: 1 ChangeByte- 00:06:50.352 [2024-11-26 20:09:03.203773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.203800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.203871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.203886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.203939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.203956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.352 #14 NEW cov: 12486 ft: 13435 corp: 5/102b lim: 35 exec/s: 0 rss: 74Mb L: 25/26 MS: 1 EraseBytes- 00:06:50.352 [2024-11-26 20:09:03.264034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.264061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.264116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.264130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.264181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.264195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.352 [2024-11-26 20:09:03.264249] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.352 [2024-11-26 20:09:03.264261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.611 #15 NEW cov: 12486 ft: 13841 corp: 6/135b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 CrossOver- 00:06:50.611 [2024-11-26 20:09:03.304256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.304281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.304334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.304347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.304399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.304413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.304467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.304479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.304531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.304544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.611 #16 NEW cov: 12486 ft: 13944 corp: 7/170b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:50.611 [2024-11-26 20:09:03.364121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.364146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.364217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.364234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.364287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.364300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.611 #17 NEW cov: 12486 ft: 14025 corp: 8/195b lim: 35 
exec/s: 0 rss: 74Mb L: 25/35 MS: 1 ShuffleBytes- 00:06:50.611 [2024-11-26 20:09:03.424381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.424407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.424477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.424491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.424544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.424558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.424610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.424623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.611 #18 NEW cov: 12486 ft: 14073 corp: 9/229b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 CrossOver- 00:06:50.611 [2024-11-26 20:09:03.464371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.464396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.464465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.464479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.464532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.464546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.611 #19 NEW cov: 12486 ft: 14130 corp: 10/255b lim: 35 exec/s: 0 rss: 74Mb L: 26/35 MS: 1 InsertByte- 00:06:50.611 [2024-11-26 20:09:03.504495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.504520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.504574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.504588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.611 [2024-11-26 20:09:03.504644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.611 [2024-11-26 20:09:03.504661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.611 #20 NEW cov: 12486 ft: 14177 corp: 11/281b lim: 35 exec/s: 0 rss: 74Mb L: 26/35 MS: 1 InsertRepeatedBytes- 00:06:50.870 [2024-11-26 20:09:03.544605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.544630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.870 [2024-11-26 20:09:03.544682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.544696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.870 [2024-11-26 20:09:03.544748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.544761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.870 #21 NEW cov: 12486 ft: 14199 corp: 12/307b lim: 35 exec/s: 0 rss: 74Mb L: 26/35 MS: 1 CrossOver- 00:06:50.870 [2024-11-26 20:09:03.605060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.605085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.870 [2024-11-26 20:09:03.605157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.605171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.870 [2024-11-26 20:09:03.605222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.605236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.870 [2024-11-26 20:09:03.605287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f2f2f2f2 cdw11:54540002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.605301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.870 [2024-11-26 20:09:03.605353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:54545454 cdw11:54540003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.605366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.870 #22 NEW cov: 12486 ft: 14217 corp: 13/342b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:50.870 [2024-11-26 20:09:03.664957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.870 [2024-11-26 20:09:03.664981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.871 [2024-11-26 20:09:03.665049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.665063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.871 [2024-11-26 20:09:03.665116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2a7 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.665132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.871 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:50.871 #23 NEW cov: 12509 ft: 14277 corp: 14/369b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 InsertByte- 00:06:50.871 [2024-11-26 20:09:03.705060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.705085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.871 [2024-11-26 20:09:03.705138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f22df2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.705151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.871 [2024-11-26 20:09:03.705203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.705217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.871 #24 NEW cov: 12509 ft: 14299 corp: 15/395b lim: 35 exec/s: 0 rss: 74Mb L: 26/35 MS: 1 InsertByte- 00:06:50.871 [2024-11-26 20:09:03.765395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2ff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.765421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.871 [2024-11-26 20:09:03.765476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8acf0931 cdw11:da910000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.765490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.871 [2024-11-26 20:09:03.765541] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.765555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.871 [2024-11-26 20:09:03.765612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.871 [2024-11-26 20:09:03.765625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.871 #25 NEW cov: 12509 ft: 14320 corp: 16/429b lim: 35 exec/s: 25 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "E\0111\212\317\332\221\000"- 00:06:51.130 [2024-11-26 20:09:03.805518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.805543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.805615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:318a4509 cdw11:cfda0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.805630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.805682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f200f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.805696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.805752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.805765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.130 #26 NEW cov: 12509 ft: 14349 corp: 17/462b lim: 35 exec/s: 26 rss: 74Mb L: 33/35 MS: 1 PersAutoDict- DE: "E\0111\212\317\332\221\000"- 00:06:51.130 [2024-11-26 20:09:03.845611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.845654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.845706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.845720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.845769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.845783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:51.130 [2024-11-26 20:09:03.845835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f214f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.845848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.130 #27 NEW cov: 12509 ft: 14383 corp: 18/492b lim: 35 exec/s: 27 rss: 74Mb L: 30/35 MS: 1 CopyPart- 00:06:51.130 [2024-11-26 20:09:03.885608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:45090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.885634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.885706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:da918acf cdw11:00f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.885720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.885774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.885787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.130 #28 NEW cov: 12509 ft: 14402 corp: 19/517b lim: 35 exec/s: 28 rss: 75Mb L: 25/35 MS: 1 PersAutoDict- DE: "E\0111\212\317\332\221\000"- 00:06:51.130 [2024-11-26 20:09:03.925712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.925737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.925808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.925822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.925873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2e2a7 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.925887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.130 #29 NEW cov: 12509 ft: 14487 corp: 20/544b lim: 35 exec/s: 29 rss: 75Mb L: 27/35 MS: 1 ChangeBit- 00:06:51.130 [2024-11-26 20:09:03.986031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.986057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.986129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f8f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 
[2024-11-26 20:09:03.986143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.986196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.986210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:03.986262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f2f2a7f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:03.986275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.130 #30 NEW cov: 12509 ft: 14531 corp: 21/577b lim: 35 exec/s: 30 rss: 75Mb L: 33/35 MS: 1 CopyPart- 00:06:51.130 [2024-11-26 20:09:04.046334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:a7f2c0e2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:04.046359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:04.046413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f8f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:04.046427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:04.046479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:04.046492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:04.046545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a7f2f2e2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:04.046558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.130 [2024-11-26 20:09:04.046613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.130 [2024-11-26 20:09:04.046626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.389 #31 NEW cov: 12509 ft: 14541 corp: 22/612b lim: 35 exec/s: 31 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:06:51.389 [2024-11-26 20:09:04.106349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.389 [2024-11-26 20:09:04.106374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.389 [2024-11-26 20:09:04.106427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 
[2024-11-26 20:09:04.106440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.106497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.106510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.106562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0931ff45 cdw11:8acf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.106574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.390 #32 NEW cov: 12509 ft: 14596 corp: 23/645b lim: 35 exec/s: 32 rss: 75Mb L: 33/35 MS: 1 PersAutoDict- DE: "E\0111\212\317\332\221\000"- 00:06:51.390 [2024-11-26 20:09:04.146268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.146294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.146362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.146376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.146431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.146444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.390 #33 NEW cov: 12509 ft: 14601 corp: 24/672b lim: 35 exec/s: 33 rss: 75Mb L: 27/35 MS: 1 CrossOver- 00:06:51.390 [2024-11-26 20:09:04.186747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.186773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.186827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.186840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.186894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.186907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.186959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f2f2f2f2 cdw11:54540002 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.186972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.187024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:54545454 cdw11:54540003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.187038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.390 #34 NEW cov: 12509 ft: 14609 corp: 25/707b lim: 35 exec/s: 34 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:06:51.390 [2024-11-26 20:09:04.246416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.246445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.246514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.246529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.390 #35 NEW cov: 12509 ft: 14891 corp: 26/725b lim: 35 exec/s: 35 rss: 75Mb L: 18/35 MS: 1 EraseBytes- 00:06:51.390 [2024-11-26 20:09:04.306915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.306940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.306995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f6f8f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.307009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.307062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.307075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.390 [2024-11-26 20:09:04.307127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f2f2e2a7 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.390 [2024-11-26 20:09:04.307140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.649 #36 NEW cov: 12509 ft: 14906 corp: 27/759b lim: 35 exec/s: 36 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:06:51.649 [2024-11-26 20:09:04.347208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:a7f2c0ea cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.347233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.347289] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f8f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.347303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.347356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.347369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.347422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a7f2f2e2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.347435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.347487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.347500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.649 #37 NEW cov: 12509 ft: 14927 corp: 28/794b lim: 35 exec/s: 37 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:06:51.649 [2024-11-26 20:09:04.407210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.407238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.407306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.407320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.407373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.407386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.407439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f2f2f2f2 cdw11:54540003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.407452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.649 #38 NEW cov: 12509 ft: 14935 corp: 29/825b lim: 35 exec/s: 38 rss: 75Mb L: 31/35 MS: 1 CrossOver- 00:06:51.649 [2024-11-26 20:09:04.447174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.447198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.447265] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f22df2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.447279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.447332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.447346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.649 #39 NEW cov: 12509 ft: 15008 corp: 30/851b lim: 35 exec/s: 39 rss: 75Mb L: 26/35 MS: 1 ChangeBit- 00:06:51.649 [2024-11-26 20:09:04.507586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:a7f2c0ea cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.507614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.507686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f8f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.507700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.507752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f2f2f2f2 cdw11:0af20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.507765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.507817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a7f2f2e2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.507830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.507880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:f2f2f2f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.507893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.649 #40 NEW cov: 12509 ft: 15016 corp: 31/886b lim: 35 exec/s: 40 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:06:51.649 [2024-11-26 20:09:04.567328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f8c0f2 cdw11:f2ff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.567355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.649 [2024-11-26 20:09:04.567426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8acf0931 cdw11:da910000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.649 [2024-11-26 20:09:04.567440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.908 #41 NEW cov: 12509 ft: 15025 corp: 32/904b 
lim: 35 exec/s: 41 rss: 76Mb L: 18/35 MS: 1 EraseBytes- 00:06:51.908 [2024-11-26 20:09:04.627690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f2f2c0f2 cdw11:f2f20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.627716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.627769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.627783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.627836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.627850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.908 #42 NEW cov: 12509 ft: 15031 corp: 33/929b lim: 35 exec/s: 42 rss: 76Mb L: 25/35 MS: 1 CrossOver- 00:06:51.908 [2024-11-26 20:09:04.667591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.667621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.667675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.667689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.908 #43 NEW cov: 12509 ft: 15054 corp: 34/949b lim: 35 exec/s: 43 rss: 76Mb L: 20/35 MS: 1 InsertRepeatedBytes- 00:06:51.908 [2024-11-26 20:09:04.708014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.708039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.708093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.708106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.708160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.708190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.708245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.708258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.908 #44 NEW cov: 12509 ft: 15059 corp: 35/979b lim: 35 exec/s: 44 rss: 76Mb L: 30/35 MS: 1 CopyPart- 00:06:51.908 [2024-11-26 20:09:04.748282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.748307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.748362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f2f2ffff cdw11:f2ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.748375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.908 [2024-11-26 20:09:04.748428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.908 [2024-11-26 20:09:04.748442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.909 [2024-11-26 20:09:04.748495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.909 [2024-11-26 20:09:04.748508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.909 [2024-11-26 20:09:04.748561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.909 [2024-11-26 20:09:04.748574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.909 #45 NEW cov: 12509 ft: 15075 corp: 36/1014b lim: 35 exec/s: 22 rss: 76Mb L: 35/35 MS: 1 CrossOver- 00:06:51.909 #45 DONE cov: 12509 ft: 15075 corp: 36/1014b lim: 35 exec/s: 22 rss: 76Mb 00:06:51.909 ###### Recommended dictionary. ###### 00:06:51.909 "E\0111\212\317\332\221\000" # Uses: 3 00:06:51.909 ###### End of recommended dictionary. 
###### 00:06:51.909 Done 45 runs in 2 second(s) 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:52.168 20:09:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:52.168 [2024-11-26 20:09:04.935638] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:52.168 [2024-11-26 20:09:04.935711] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609194 ] 00:06:52.428 [2024-11-26 20:09:05.123095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.428 [2024-11-26 20:09:05.160868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.428 [2024-11-26 20:09:05.220086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.428 [2024-11-26 20:09:05.236452] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:52.428 INFO: Running with entropic power schedule (0xFF, 100). 
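Each run summary above ends with a "Recommended dictionary" block: byte sequences (printed as C-style octal strings, e.g. "\000\001" and "E\0111\212\317\332\221\000") that the CMP/PersAutoDict mutations found productive. The wrapper shown here does not visibly feed them back; if reusing them in a local rerun, a minimal sketch in standard libFuzzer/AFL dictionary syntax would be the following (hypothetical file name and entry labels; hex escapes replace the octal shown in the log, and whether llvm_nvme_fuzz forwards a -dict= option to libFuzzer is not shown in this log):

```bash
# Recommended entries from the two run summaries above, converted from octal to hex escapes.
cat > /tmp/nvmf_fuzz.dict <<'EOF'
# "\000\001" from the first summary above
kw_cq_1="\x00\x01"
# "E\0111\212\317\332\221\000" from the second summary above
kw_cq_2="\x45\x09\x31\x8a\xcf\xda\x91\x00"
EOF
```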
00:06:52.428 INFO: Seed: 3800301180 00:06:52.428 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:52.428 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:52.428 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:52.428 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.428 #2 INITED exec/s: 0 rss: 65Mb 00:06:52.428 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.428 This may also happen if the target rejected all inputs we tried so far 00:06:52.428 [2024-11-26 20:09:05.312926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.428 [2024-11-26 20:09:05.312963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.428 [2024-11-26 20:09:05.313089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.428 [2024-11-26 20:09:05.313107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.945 NEW_FUNC[1/717]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:52.945 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.945 #20 NEW cov: 12293 ft: 12294 corp: 2/20b lim: 45 exec/s: 0 rss: 73Mb L: 19/19 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:52.945 [2024-11-26 20:09:05.664180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.664227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.945 [2024-11-26 20:09:05.664365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.664386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.945 [2024-11-26 20:09:05.664513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.664533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.945 #21 NEW cov: 12406 ft: 13212 corp: 3/50b lim: 45 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:52.945 [2024-11-26 20:09:05.734314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.734344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.945 [2024-11-26 20:09:05.734477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.734495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.945 [2024-11-26 20:09:05.734620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.734638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.945 #22 NEW cov: 12412 ft: 13470 corp: 4/80b lim: 45 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:06:52.945 [2024-11-26 20:09:05.804471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.804503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.945 [2024-11-26 20:09:05.804629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.804648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.945 [2024-11-26 20:09:05.804770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.804786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.945 #23 NEW cov: 12497 ft: 13732 corp: 5/111b lim: 45 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 InsertByte- 00:06:52.945 [2024-11-26 20:09:05.854312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:ff0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.854340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.945 [2024-11-26 20:09:05.854466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.945 [2024-11-26 20:09:05.854484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.204 #24 NEW cov: 12497 ft: 13808 corp: 6/130b lim: 45 exec/s: 0 rss: 74Mb L: 19/31 MS: 1 ChangeByte- 00:06:53.204 [2024-11-26 20:09:05.904567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:05.904595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.204 [2024-11-26 20:09:05.904721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:05.904739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:53.204 #25 NEW cov: 12497 ft: 13857 corp: 7/154b lim: 45 exec/s: 0 rss: 74Mb L: 24/31 MS: 1 EraseBytes- 00:06:53.204 [2024-11-26 20:09:05.974987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:05.975021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.204 [2024-11-26 20:09:05.975150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:05.975168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.204 [2024-11-26 20:09:05.975294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:05.975312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.204 #26 NEW cov: 12497 ft: 13945 corp: 8/184b lim: 45 exec/s: 0 rss: 74Mb L: 30/31 MS: 1 ChangeBinInt- 00:06:53.204 [2024-11-26 20:09:06.024722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:06.024751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.204 [2024-11-26 20:09:06.024873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a0f8a cdw11:8a8a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:06.024890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.204 #27 NEW cov: 12497 ft: 13986 corp: 9/209b lim: 45 exec/s: 0 rss: 74Mb L: 25/31 MS: 1 InsertByte- 00:06:53.204 [2024-11-26 20:09:06.095440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:06.095468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.204 [2024-11-26 20:09:06.095605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:06.095623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.204 [2024-11-26 20:09:06.095742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.204 [2024-11-26 20:09:06.095762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.204 #28 NEW cov: 12497 ft: 14071 corp: 10/239b lim: 45 exec/s: 0 rss: 74Mb L: 30/31 MS: 1 CrossOver- 00:06:53.463 [2024-11-26 20:09:06.145540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 
cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.463 [2024-11-26 20:09:06.145569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.463 [2024-11-26 20:09:06.145706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.463 [2024-11-26 20:09:06.145727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.463 [2024-11-26 20:09:06.145849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.463 [2024-11-26 20:09:06.145867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.463 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:53.463 #29 NEW cov: 12520 ft: 14135 corp: 11/273b lim: 45 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:06:53.463 [2024-11-26 20:09:06.215465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.463 [2024-11-26 20:09:06.215495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.463 [2024-11-26 20:09:06.215624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a0f8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.463 [2024-11-26 20:09:06.215644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.463 #30 NEW cov: 12520 ft: 14302 corp: 12/298b lim: 45 exec/s: 0 rss: 74Mb L: 25/34 MS: 1 CopyPart- 00:06:53.464 [2024-11-26 20:09:06.285957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:d80f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.285986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.464 [2024-11-26 20:09:06.286114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a0f8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.286133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.464 [2024-11-26 20:09:06.286258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.286277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.464 #31 NEW cov: 12520 ft: 14312 corp: 13/329b lim: 45 exec/s: 31 rss: 74Mb L: 31/34 MS: 1 InsertByte- 00:06:53.464 [2024-11-26 20:09:06.356888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.356918] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.464 [2024-11-26 20:09:06.357042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.357060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.464 [2024-11-26 20:09:06.357186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.357205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.464 [2024-11-26 20:09:06.357326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.357344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.464 [2024-11-26 20:09:06.357469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.464 [2024-11-26 20:09:06.357488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.464 #32 NEW cov: 12520 ft: 14727 corp: 14/374b lim: 45 exec/s: 32 rss: 74Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:53.723 [2024-11-26 20:09:06.406104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:d80f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.406132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.406258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.406276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.723 #33 NEW cov: 12520 ft: 14839 corp: 15/397b lim: 45 exec/s: 33 rss: 74Mb L: 23/45 MS: 1 EraseBytes- 00:06:53.723 [2024-11-26 20:09:06.476655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:ff0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.476694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.476817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.476836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.476958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.476975] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.723 #34 NEW cov: 12520 ft: 14866 corp: 16/426b lim: 45 exec/s: 34 rss: 74Mb L: 29/45 MS: 1 InsertRepeatedBytes- 00:06:53.723 [2024-11-26 20:09:06.546843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.546873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.546993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.547012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.547132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.547147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.723 #35 NEW cov: 12520 ft: 14878 corp: 17/457b lim: 45 exec/s: 35 rss: 74Mb L: 31/45 MS: 1 CrossOver- 00:06:53.723 [2024-11-26 20:09:06.596972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.597000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.597116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a2a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.597135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.597256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.597272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.723 #36 NEW cov: 12520 ft: 14901 corp: 18/488b lim: 45 exec/s: 36 rss: 74Mb L: 31/45 MS: 1 InsertByte- 00:06:53.723 [2024-11-26 20:09:06.647055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.647086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.647219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22008a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.647237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.723 [2024-11-26 20:09:06.647356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 
cid:6 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.723 [2024-11-26 20:09:06.647374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.982 #37 NEW cov: 12520 ft: 14958 corp: 19/522b lim: 45 exec/s: 37 rss: 75Mb L: 34/45 MS: 1 ChangeBinInt- 00:06:53.982 [2024-11-26 20:09:06.717267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.717297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.982 [2024-11-26 20:09:06.717431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.717450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.982 [2024-11-26 20:09:06.717572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f2f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.717588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.982 #38 NEW cov: 12520 ft: 14972 corp: 20/551b lim: 45 exec/s: 38 rss: 75Mb L: 29/45 MS: 1 CopyPart- 00:06:53.982 [2024-11-26 20:09:06.767124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.767151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.982 [2024-11-26 20:09:06.767276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.767293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.982 #39 NEW cov: 12520 ft: 14996 corp: 21/576b lim: 45 exec/s: 39 rss: 75Mb L: 25/45 MS: 1 InsertByte- 00:06:53.982 [2024-11-26 20:09:06.817252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.817279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.982 [2024-11-26 20:09:06.817400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a0f0f8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.817432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.982 #40 NEW cov: 12520 ft: 15018 corp: 22/596b lim: 45 exec/s: 40 rss: 75Mb L: 20/45 MS: 1 EraseBytes- 00:06:53.982 [2024-11-26 20:09:06.868046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.868076] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.982 [2024-11-26 20:09:06.868202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22008a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.868224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.982 [2024-11-26 20:09:06.868346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.868363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.982 [2024-11-26 20:09:06.868482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.982 [2024-11-26 20:09:06.868501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.982 #41 NEW cov: 12520 ft: 15052 corp: 23/638b lim: 45 exec/s: 41 rss: 75Mb L: 42/45 MS: 1 InsertRepeatedBytes- 00:06:54.241 [2024-11-26 20:09:06.937931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:06.937960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:06.938086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a5d cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:06.938103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:06.938227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f8a8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:06.938244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.241 #42 NEW cov: 12520 ft: 15063 corp: 24/669b lim: 45 exec/s: 42 rss: 75Mb L: 31/45 MS: 1 InsertByte- 00:06:54.241 [2024-11-26 20:09:06.988116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:06.988145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:06.988276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a0f4f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:06.988292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:06.988416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0f0f0f8a cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:06.988433] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.241 #43 NEW cov: 12520 ft: 15098 corp: 25/704b lim: 45 exec/s: 43 rss: 75Mb L: 35/45 MS: 1 CrossOver- 00:06:54.241 [2024-11-26 20:09:07.058524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f530f0f cdw11:53530002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:07.058552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:07.058682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53530002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:07.058701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:07.058825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53530000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:07.058844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:07.058962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:07.058980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.241 #44 NEW cov: 12520 ft: 15100 corp: 26/744b lim: 45 exec/s: 44 rss: 75Mb L: 40/45 MS: 1 InsertRepeatedBytes- 00:06:54.241 [2024-11-26 20:09:07.108184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f9c cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:07.108212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.241 [2024-11-26 20:09:07.108339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a0f0f cdw11:8a0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.241 [2024-11-26 20:09:07.108356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.241 #45 NEW cov: 12520 ft: 15104 corp: 27/770b lim: 45 exec/s: 45 rss: 75Mb L: 26/45 MS: 1 InsertByte- 00:06:54.500 [2024-11-26 20:09:07.178074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.500 [2024-11-26 20:09:07.178102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.500 #46 NEW cov: 12520 ft: 15793 corp: 28/786b lim: 45 exec/s: 46 rss: 75Mb L: 16/45 MS: 1 CrossOver- 00:06:54.500 [2024-11-26 20:09:07.228286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.500 [2024-11-26 20:09:07.228315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.500 #47 
NEW cov: 12520 ft: 15810 corp: 29/802b lim: 45 exec/s: 47 rss: 75Mb L: 16/45 MS: 1 EraseBytes- 00:06:54.500 [2024-11-26 20:09:07.278498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f0f0f0f cdw11:0f0f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.500 [2024-11-26 20:09:07.278526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.500 #48 NEW cov: 12520 ft: 15839 corp: 30/811b lim: 45 exec/s: 24 rss: 75Mb L: 9/45 MS: 1 EraseBytes- 00:06:54.500 #48 DONE cov: 12520 ft: 15839 corp: 30/811b lim: 45 exec/s: 24 rss: 75Mb 00:06:54.500 Done 48 runs in 2 second(s) 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:54.500 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:54.759 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:54.759 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.759 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.759 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.759 20:09:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:54.759 [2024-11-26 20:09:07.464501] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:54.759 [2024-11-26 20:09:07.464575] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609681 ] 00:06:54.759 [2024-11-26 20:09:07.655876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.018 [2024-11-26 20:09:07.692948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.018 [2024-11-26 20:09:07.752215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.018 [2024-11-26 20:09:07.768549] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:55.018 INFO: Running with entropic power schedule (0xFF, 100). 00:06:55.018 INFO: Seed: 2038262201 00:06:55.018 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:55.018 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:55.018 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:55.018 INFO: A corpus is not provided, starting from an empty corpus 00:06:55.018 #2 INITED exec/s: 0 rss: 65Mb 00:06:55.018 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:55.018 This may also happen if the target rejected all inputs we tried so far 00:06:55.018 [2024-11-26 20:09:07.813938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:55.018 [2024-11-26 20:09:07.813968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.018 [2024-11-26 20:09:07.814020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.018 [2024-11-26 20:09:07.814050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.277 NEW_FUNC[1/715]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:55.277 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:55.277 #3 NEW cov: 12210 ft: 12191 corp: 2/6b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:55.277 [2024-11-26 20:09:08.125140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:55.277 [2024-11-26 20:09:08.125180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.277 [2024-11-26 20:09:08.125239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.277 [2024-11-26 20:09:08.125257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.277 [2024-11-26 20:09:08.125329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.277 [2024-11-26 20:09:08.125343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.277 [2024-11-26 20:09:08.125392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.277 [2024-11-26 20:09:08.125405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.277 [2024-11-26 20:09:08.125453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:55.277 [2024-11-26 20:09:08.125466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.277 #4 NEW cov: 12323 ft: 13129 corp: 3/16b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:55.277 [2024-11-26 20:09:08.184825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:55.277 [2024-11-26 20:09:08.184850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.277 [2024-11-26 20:09:08.184901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.277 [2024-11-26 20:09:08.184915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.277 #5 NEW cov: 12329 ft: 13316 corp: 4/21b lim: 10 exec/s: 0 rss: 73Mb L: 5/10 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:55.536 [2024-11-26 20:09:08.224901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000600 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.224926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.224976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.224989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.536 #6 NEW cov: 12414 ft: 13626 corp: 5/26b lim: 10 exec/s: 0 rss: 73Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:55.536 [2024-11-26 20:09:08.265368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.265393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.265444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.265457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.265508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.265521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.265572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.265585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.265638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.265652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.536 #7 NEW cov: 12414 ft: 13715 corp: 6/36b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:55.536 [2024-11-26 20:09:08.325305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001a2 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.325330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.325381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.325395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.325446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.325459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.536 #8 NEW cov: 12414 ft: 14002 corp: 7/42b lim: 10 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 InsertByte- 00:06:55.536 [2024-11-26 20:09:08.385607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.385632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.385683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.385697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.385745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.385759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.385823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.385837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.536 #9 NEW cov: 12414 ft: 14058 corp: 8/51b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 CrossOver- 00:06:55.536 [2024-11-26 20:09:08.425559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.425584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.425638] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.425652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.536 [2024-11-26 20:09:08.425704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:55.536 [2024-11-26 20:09:08.425717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.536 #11 NEW cov: 12414 ft: 14083 corp: 9/58b lim: 10 exec/s: 0 rss: 73Mb L: 7/10 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:55.795 [2024-11-26 20:09:08.465724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001a2 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.465749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.795 [2024-11-26 20:09:08.465801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.465815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.795 [2024-11-26 20:09:08.465868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000b5 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.465882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.795 #12 NEW cov: 12414 ft: 14197 corp: 10/65b lim: 10 exec/s: 0 rss: 74Mb L: 7/10 MS: 1 InsertByte- 00:06:55.795 [2024-11-26 20:09:08.525785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.525809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.795 [2024-11-26 20:09:08.525861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000020 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.525874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.795 #13 NEW cov: 12414 ft: 14237 corp: 11/70b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 ChangeBit- 00:06:55.795 [2024-11-26 20:09:08.565949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.565974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.795 [2024-11-26 20:09:08.566025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.566038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.795 [2024-11-26 20:09:08.566088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.566101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.795 #14 NEW cov: 12414 ft: 14261 corp: 12/77b lim: 10 exec/s: 0 rss: 74Mb L: 7/10 MS: 1 CopyPart- 00:06:55.795 [2024-11-26 20:09:08.626039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.626063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.795 [2024-11-26 20:09:08.626111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.626124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.795 #15 NEW cov: 12414 ft: 14279 corp: 13/82b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:55.795 [2024-11-26 20:09:08.666095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.666120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.795 [2024-11-26 20:09:08.666171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b50a cdw11:00000000 00:06:55.795 [2024-11-26 20:09:08.666184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.795 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:55.795 #16 NEW cov: 12437 ft: 14345 corp: 14/86b lim: 10 exec/s: 0 rss: 74Mb L: 4/10 MS: 1 EraseBytes- 00:06:56.055 [2024-11-26 20:09:08.726573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.726602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.726654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004401 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.726670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.726719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.726732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.726785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.726798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.055 #17 NEW cov: 12437 ft: 14374 corp: 15/94b lim: 10 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:06:56.055 [2024-11-26 20:09:08.766254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a44 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.766278] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.055 #22 NEW cov: 12437 ft: 14585 corp: 16/97b lim: 10 exec/s: 0 rss: 74Mb L: 3/10 MS: 5 ChangeBit-CopyPart-CopyPart-ShuffleBytes-CrossOver- 00:06:56.055 [2024-11-26 20:09:08.806508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000500 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.806534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.806605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.806620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.055 #23 NEW cov: 12437 ft: 14595 corp: 17/102b lim: 10 exec/s: 23 rss: 74Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:56.055 [2024-11-26 20:09:08.846738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.846762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.846816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b500 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.846829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.846880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000aa2 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.846894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.055 #24 NEW cov: 12437 ft: 14622 corp: 18/109b lim: 10 exec/s: 24 rss: 74Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:56.055 [2024-11-26 20:09:08.886963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004644 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.886990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.887042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004401 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.887057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.887107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.887122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.887178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.887193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.055 #25 NEW cov: 12437 ft: 14640 corp: 19/117b lim: 10 exec/s: 25 
rss: 74Mb L: 8/10 MS: 1 ChangeBit- 00:06:56.055 [2024-11-26 20:09:08.947139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.947163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.947217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.947230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.947280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.947308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.055 [2024-11-26 20:09:08.947361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000200a cdw11:00000000 00:06:56.055 [2024-11-26 20:09:08.947374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.314 #26 NEW cov: 12437 ft: 14654 corp: 20/125b lim: 10 exec/s: 26 rss: 74Mb L: 8/10 MS: 1 CopyPart- 00:06:56.314 [2024-11-26 20:09:09.007186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000500 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.007210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.007262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.007275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.007327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000aff cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.007340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.314 #27 NEW cov: 12437 ft: 14665 corp: 21/131b lim: 10 exec/s: 27 rss: 74Mb L: 6/10 MS: 1 InsertByte- 00:06:56.314 [2024-11-26 20:09:09.067452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.067476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.067527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.067541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.067592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a88 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.067610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.067662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008888 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.067674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.314 #28 NEW cov: 12437 ft: 14675 corp: 22/139b lim: 10 exec/s: 28 rss: 74Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:06:56.314 [2024-11-26 20:09:09.107579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.107609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.107661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.107674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.107725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a7b cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.107738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.107789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.107801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.314 #29 NEW cov: 12437 ft: 14696 corp: 23/147b lim: 10 exec/s: 29 rss: 74Mb L: 8/10 MS: 1 CrossOver- 00:06:56.314 [2024-11-26 20:09:09.167632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.167658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.167726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.167739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.167792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.167806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.314 #30 NEW cov: 12437 ft: 14710 corp: 24/153b lim: 10 exec/s: 30 rss: 74Mb L: 6/10 MS: 1 InsertByte- 00:06:56.314 [2024-11-26 20:09:09.227696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:56.314 [2024-11-26 20:09:09.227721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.314 [2024-11-26 20:09:09.227773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.314 
[2024-11-26 20:09:09.227787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.573 #31 NEW cov: 12437 ft: 14712 corp: 25/158b lim: 10 exec/s: 31 rss: 75Mb L: 5/10 MS: 1 EraseBytes- 00:06:56.573 [2024-11-26 20:09:09.287851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.287876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.573 [2024-11-26 20:09:09.287929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000200a cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.287943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.573 #32 NEW cov: 12437 ft: 14793 corp: 26/163b lim: 10 exec/s: 32 rss: 75Mb L: 5/10 MS: 1 CrossOver- 00:06:56.573 [2024-11-26 20:09:09.327945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000501 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.327970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.573 [2024-11-26 20:09:09.328026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.328039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.573 #33 NEW cov: 12437 ft: 14838 corp: 27/168b lim: 10 exec/s: 33 rss: 75Mb L: 5/10 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:56.573 [2024-11-26 20:09:09.368412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000501 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.368437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.573 [2024-11-26 20:09:09.368488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.368501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.573 [2024-11-26 20:09:09.368550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.368564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.573 [2024-11-26 20:09:09.368615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.368627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.573 [2024-11-26 20:09:09.368676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.573 [2024-11-26 20:09:09.368689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.573 #34 NEW cov: 12437 ft: 14843 corp: 
28/178b lim: 10 exec/s: 34 rss: 75Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:56.573 [2024-11-26 20:09:09.428490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.428516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.574 [2024-11-26 20:09:09.428569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.428583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.574 [2024-11-26 20:09:09.428636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.428650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.574 [2024-11-26 20:09:09.428699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.428712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.574 #35 NEW cov: 12437 ft: 14861 corp: 29/186b lim: 10 exec/s: 35 rss: 75Mb L: 8/10 MS: 1 ShuffleBytes- 00:06:56.574 [2024-11-26 20:09:09.488633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001a2 cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.488659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.574 [2024-11-26 20:09:09.488711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.488725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.574 [2024-11-26 20:09:09.488777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000021 cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.488790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.574 [2024-11-26 20:09:09.488857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000b50a cdw11:00000000 00:06:56.574 [2024-11-26 20:09:09.488871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.832 #36 NEW cov: 12437 ft: 14865 corp: 30/194b lim: 10 exec/s: 36 rss: 75Mb L: 8/10 MS: 1 InsertByte- 00:06:56.832 [2024-11-26 20:09:09.528799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.528824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.528901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002a cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.528915] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.528965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.528978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.529028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008888 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.529041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.832 #37 NEW cov: 12437 ft: 14921 corp: 31/203b lim: 10 exec/s: 37 rss: 75Mb L: 9/10 MS: 1 InsertByte- 00:06:56.832 [2024-11-26 20:09:09.589074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.589098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.589151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.589164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.589216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000040 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.589229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.589279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.589307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.589356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.589370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.832 #38 NEW cov: 12437 ft: 14940 corp: 32/213b lim: 10 exec/s: 38 rss: 75Mb L: 10/10 MS: 1 ChangeBit- 00:06:56.832 [2024-11-26 20:09:09.628869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000501 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.628893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.628961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.628978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.832 #39 NEW cov: 12437 ft: 14997 corp: 33/218b lim: 10 exec/s: 39 rss: 75Mb L: 5/10 MS: 1 CrossOver- 00:06:56.832 [2024-11-26 20:09:09.668840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:00000a0f cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.668864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.832 #40 NEW cov: 12437 ft: 15015 corp: 34/220b lim: 10 exec/s: 40 rss: 75Mb L: 2/10 MS: 1 InsertByte- 00:06:56.832 [2024-11-26 20:09:09.709284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.709309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.709360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.832 [2024-11-26 20:09:09.709373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.832 [2024-11-26 20:09:09.709425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.833 [2024-11-26 20:09:09.709438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.833 [2024-11-26 20:09:09.709488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.833 [2024-11-26 20:09:09.709501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.833 #41 NEW cov: 12437 ft: 15054 corp: 35/229b lim: 10 exec/s: 41 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:56.833 [2024-11-26 20:09:09.749185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:56.833 [2024-11-26 20:09:09.749210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.833 [2024-11-26 20:09:09.749262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:56.833 [2024-11-26 20:09:09.749275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.092 #42 NEW cov: 12437 ft: 15068 corp: 36/233b lim: 10 exec/s: 42 rss: 75Mb L: 4/10 MS: 1 EraseBytes- 00:06:57.092 [2024-11-26 20:09:09.789401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000501 cdw11:00000000 00:06:57.092 [2024-11-26 20:09:09.789426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.092 [2024-11-26 20:09:09.789479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.092 [2024-11-26 20:09:09.789492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.092 [2024-11-26 20:09:09.789543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000003b cdw11:00000000 00:06:57.092 [2024-11-26 20:09:09.789556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.092 #43 NEW 
cov: 12437 ft: 15115 corp: 37/239b lim: 10 exec/s: 21 rss: 75Mb L: 6/10 MS: 1 InsertByte- 00:06:57.092 #43 DONE cov: 12437 ft: 15115 corp: 37/239b lim: 10 exec/s: 21 rss: 75Mb 00:06:57.092 ###### Recommended dictionary. ###### 00:06:57.092 "\001\000\000\000" # Uses: 3 00:06:57.092 ###### End of recommended dictionary. ###### 00:06:57.092 Done 43 runs in 2 second(s) 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:57.092 20:09:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:57.092 [2024-11-26 20:09:09.956424] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:06:57.092 [2024-11-26 20:09:09.956494] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610211 ] 00:06:57.350 [2024-11-26 20:09:10.147169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.350 [2024-11-26 20:09:10.186312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.350 [2024-11-26 20:09:10.245724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.350 [2024-11-26 20:09:10.262074] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:57.350 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.350 INFO: Seed: 237296610 00:06:57.608 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:57.608 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:57.608 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:57.608 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.608 #2 INITED exec/s: 0 rss: 66Mb 00:06:57.608 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.608 This may also happen if the target rejected all inputs we tried so far 00:06:57.608 [2024-11-26 20:09:10.327442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000240a cdw11:00000000 00:06:57.608 [2024-11-26 20:09:10.327469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.866 NEW_FUNC[1/715]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:57.866 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.866 #8 NEW cov: 12192 ft: 12192 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:06:57.866 [2024-11-26 20:09:10.658348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d80a cdw11:00000000 00:06:57.866 [2024-11-26 20:09:10.658394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.866 #12 NEW cov: 12322 ft: 12817 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 4 ChangeByte-ChangeByte-CrossOver-InsertByte- 00:06:57.866 [2024-11-26 20:09:10.698251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002424 cdw11:00000000 00:06:57.866 [2024-11-26 20:09:10.698277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.866 #13 NEW cov: 12328 ft: 12996 corp: 4/8b lim: 10 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CopyPart- 00:06:57.866 [2024-11-26 20:09:10.758765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.866 [2024-11-26 20:09:10.758791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.866 [2024-11-26 20:09:10.758840] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.866 [2024-11-26 20:09:10.758853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.866 [2024-11-26 20:09:10.758901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.866 [2024-11-26 20:09:10.758915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.866 [2024-11-26 20:09:10.758961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.866 [2024-11-26 20:09:10.758974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.124 #15 NEW cov: 12413 ft: 13599 corp: 5/17b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:58.124 [2024-11-26 20:09:10.818581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.818610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.124 #17 NEW cov: 12413 ft: 13767 corp: 6/19b lim: 10 exec/s: 0 rss: 73Mb L: 2/9 MS: 2 CopyPart-CopyPart- 00:06:58.124 [2024-11-26 20:09:10.859046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.859070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.859121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.859135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.859184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000900 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.859197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.859245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.859257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.124 #18 NEW cov: 12413 ft: 13840 corp: 7/28b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:58.124 [2024-11-26 20:09:10.919239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.919264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.919331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.919345] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.919393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.919407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.919456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.919469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.124 #19 NEW cov: 12413 ft: 14008 corp: 8/37b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:06:58.124 [2024-11-26 20:09:10.959277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.959302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.959351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.959363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.959410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.959423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.124 [2024-11-26 20:09:10.959469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:10.959482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.124 #20 NEW cov: 12413 ft: 14070 corp: 9/46b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:58.124 [2024-11-26 20:09:11.019154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:06:58.124 [2024-11-26 20:09:11.019179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.124 #22 NEW cov: 12413 ft: 14098 corp: 10/48b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 2 ChangeByte-CopyPart- 00:06:58.382 [2024-11-26 20:09:11.059368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000240a cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.059392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.382 [2024-11-26 20:09:11.059440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d80a cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.059453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.382 #23 NEW cov: 12413 ft: 14307 corp: 11/52b lim: 10 exec/s: 0 rss: 74Mb L: 4/9 MS: 1 CrossOver- 00:06:58.382 [2024-11-26 20:09:11.099362] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003c0a cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.099387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.382 #24 NEW cov: 12413 ft: 14402 corp: 12/54b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 InsertByte- 00:06:58.382 [2024-11-26 20:09:11.139456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003c0a cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.139481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.382 #25 NEW cov: 12413 ft: 14427 corp: 13/56b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:58.382 [2024-11-26 20:09:11.199672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dcfe cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.199697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.382 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:58.382 #26 NEW cov: 12436 ft: 14488 corp: 14/58b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ChangeBinInt- 00:06:58.382 [2024-11-26 20:09:11.240195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.240219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.382 [2024-11-26 20:09:11.240286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.240299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.382 [2024-11-26 20:09:11.240348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.240361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.382 [2024-11-26 20:09:11.240409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.382 [2024-11-26 20:09:11.240422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.383 [2024-11-26 20:09:11.240471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000000d8 cdw11:00000000 00:06:58.383 [2024-11-26 20:09:11.240484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.383 #27 NEW cov: 12436 ft: 14531 corp: 15/68b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:58.383 [2024-11-26 20:09:11.279860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dcfe cdw11:00000000 00:06:58.383 [2024-11-26 20:09:11.279884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.640 #28 NEW cov: 12436 ft: 
14551 corp: 16/70b lim: 10 exec/s: 28 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:58.640 [2024-11-26 20:09:11.340025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.340048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.641 #29 NEW cov: 12436 ft: 14560 corp: 17/72b lim: 10 exec/s: 29 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:58.641 [2024-11-26 20:09:11.400249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000240a cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.400272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.641 #30 NEW cov: 12436 ft: 14591 corp: 18/74b lim: 10 exec/s: 30 rss: 74Mb L: 2/10 MS: 1 EraseBytes- 00:06:58.641 [2024-11-26 20:09:11.460400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.460427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.641 #31 NEW cov: 12436 ft: 14603 corp: 19/76b lim: 10 exec/s: 31 rss: 74Mb L: 2/10 MS: 1 CrossOver- 00:06:58.641 [2024-11-26 20:09:11.521043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.521068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.641 [2024-11-26 20:09:11.521119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.521133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.641 [2024-11-26 20:09:11.521181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.521194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.641 [2024-11-26 20:09:11.521241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.521254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.641 [2024-11-26 20:09:11.521304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.641 [2024-11-26 20:09:11.521317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.641 #32 NEW cov: 12436 ft: 14617 corp: 20/86b lim: 10 exec/s: 32 rss: 74Mb L: 10/10 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:06:58.899 [2024-11-26 20:09:11.580706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dc00 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.580729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:58.899 #33 NEW cov: 12436 ft: 14618 corp: 21/88b lim: 10 exec/s: 33 rss: 75Mb L: 2/10 MS: 1 CrossOver- 00:06:58.899 [2024-11-26 20:09:11.641215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.641238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.899 [2024-11-26 20:09:11.641304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008000 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.641317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.899 [2024-11-26 20:09:11.641368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.641382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.899 [2024-11-26 20:09:11.641432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.641446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.899 #34 NEW cov: 12436 ft: 14658 corp: 22/97b lim: 10 exec/s: 34 rss: 75Mb L: 9/10 MS: 1 ChangeBit- 00:06:58.899 [2024-11-26 20:09:11.681221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.681245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.899 [2024-11-26 20:09:11.681310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.681328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.899 [2024-11-26 20:09:11.681381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000900 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.681394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.899 #35 NEW cov: 12436 ft: 14802 corp: 23/103b lim: 10 exec/s: 35 rss: 75Mb L: 6/10 MS: 1 EraseBytes- 00:06:58.899 [2024-11-26 20:09:11.741161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d803 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.741185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.899 #36 NEW cov: 12436 ft: 14811 corp: 24/105b lim: 10 exec/s: 36 rss: 75Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:58.899 [2024-11-26 20:09:11.781274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002929 cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.781297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.899 #41 NEW cov: 12436 ft: 14857 corp: 25/107b lim: 10 exec/s: 41 rss: 75Mb L: 2/10 MS: 5 
EraseBytes-ChangeByte-ChangeByte-ShuffleBytes-CopyPart- 00:06:58.899 [2024-11-26 20:09:11.821406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000da0a cdw11:00000000 00:06:58.899 [2024-11-26 20:09:11.821430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.157 #42 NEW cov: 12436 ft: 14865 corp: 26/109b lim: 10 exec/s: 42 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:06:59.157 [2024-11-26 20:09:11.861945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.157 [2024-11-26 20:09:11.861970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.157 [2024-11-26 20:09:11.862037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.157 [2024-11-26 20:09:11.862051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.157 [2024-11-26 20:09:11.862100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.157 [2024-11-26 20:09:11.862113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.157 [2024-11-26 20:09:11.862163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.157 [2024-11-26 20:09:11.862176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.157 [2024-11-26 20:09:11.862227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:59.157 [2024-11-26 20:09:11.862240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.157 #43 NEW cov: 12436 ft: 14877 corp: 27/119b lim: 10 exec/s: 43 rss: 75Mb L: 10/10 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\000"- 00:06:59.157 [2024-11-26 20:09:11.921674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000da0a cdw11:00000000 00:06:59.157 [2024-11-26 20:09:11.921698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.157 #44 NEW cov: 12436 ft: 14883 corp: 28/121b lim: 10 exec/s: 44 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:06:59.157 [2024-11-26 20:09:11.961793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000094 cdw11:00000000 00:06:59.157 [2024-11-26 20:09:11.961819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.157 #45 NEW cov: 12436 ft: 14895 corp: 29/123b lim: 10 exec/s: 45 rss: 75Mb L: 2/10 MS: 1 ChangeByte- 00:06:59.157 [2024-11-26 20:09:12.001890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dcfe cdw11:00000000 00:06:59.157 [2024-11-26 20:09:12.001915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.157 #46 
NEW cov: 12436 ft: 15059 corp: 30/125b lim: 10 exec/s: 46 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:59.157 [2024-11-26 20:09:12.041989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002c7a cdw11:00000000 00:06:59.157 [2024-11-26 20:09:12.042013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.157 #51 NEW cov: 12436 ft: 15066 corp: 31/127b lim: 10 exec/s: 51 rss: 75Mb L: 2/10 MS: 5 EraseBytes-ShuffleBytes-ChangeByte-ChangeByte-InsertByte- 00:06:59.415 [2024-11-26 20:09:12.102555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.102579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.415 [2024-11-26 20:09:12.102660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.102673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.415 [2024-11-26 20:09:12.102720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.102733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.415 [2024-11-26 20:09:12.102782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.102795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.415 #52 NEW cov: 12436 ft: 15077 corp: 32/136b lim: 10 exec/s: 52 rss: 75Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:59.415 [2024-11-26 20:09:12.142295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003c0a cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.142319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.415 #53 NEW cov: 12436 ft: 15085 corp: 33/138b lim: 10 exec/s: 53 rss: 75Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:59.415 [2024-11-26 20:09:12.182441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dc5b cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.182465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.415 #54 NEW cov: 12436 ft: 15107 corp: 34/141b lim: 10 exec/s: 54 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:06:59.415 [2024-11-26 20:09:12.222738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.222763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.415 [2024-11-26 20:09:12.222827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.222841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.415 [2024-11-26 20:09:12.222889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f700 cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.222906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.415 #55 NEW cov: 12436 ft: 15112 corp: 35/147b lim: 10 exec/s: 55 rss: 75Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:59.415 [2024-11-26 20:09:12.282682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003c8a cdw11:00000000 00:06:59.415 [2024-11-26 20:09:12.282707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.415 #56 NEW cov: 12436 ft: 15132 corp: 36/149b lim: 10 exec/s: 28 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:06:59.415 #56 DONE cov: 12436 ft: 15132 corp: 36/149b lim: 10 exec/s: 28 rss: 75Mb 00:06:59.415 ###### Recommended dictionary. ###### 00:06:59.415 "\377\377\377\377\377\377\377\000" # Uses: 1 00:06:59.415 ###### End of recommended dictionary. ###### 00:06:59.415 Done 56 runs in 2 second(s) 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.674 20:09:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:59.674 [2024-11-26 20:09:12.469380] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:06:59.674 [2024-11-26 20:09:12.469461] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610609 ] 00:06:59.932 [2024-11-26 20:09:12.668565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.932 [2024-11-26 20:09:12.702664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.932 [2024-11-26 20:09:12.762049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.932 [2024-11-26 20:09:12.778388] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:59.932 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.932 INFO: Seed: 2753287596 00:06:59.932 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:06:59.932 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:06:59.932 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:59.932 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.932 [2024-11-26 20:09:12.823249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.932 [2024-11-26 20:09:12.823285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.932 #2 INITED cov: 12237 ft: 12229 corp: 1/1b exec/s: 0 rss: 71Mb 00:07:00.189 [2024-11-26 20:09:12.873442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.873475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:12.873510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.873526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:12.873557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.873573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:12.873610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.873626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.190 #3 NEW cov: 12350 ft: 13571 corp: 2/5b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:00.190 [2024-11-26 20:09:12.963761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.963792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:12.963840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.963856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:12.963886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.963902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:12.963931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.963947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:12.963977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:12.963992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.190 #4 NEW cov: 12356 ft: 14026 corp: 3/10b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:00.190 [2024-11-26 20:09:13.053918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:13.053949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:13.053984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:13.054000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:13.054030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:13.054046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:13.054076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:13.054092] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.190 [2024-11-26 20:09:13.054121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.190 [2024-11-26 20:09:13.054137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.448 #5 NEW cov: 12441 ft: 14251 corp: 4/15b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:07:00.448 [2024-11-26 20:09:13.144135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.144167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.144201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.144218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.144247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.144264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.144293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.144308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.448 #6 NEW cov: 12441 ft: 14298 corp: 5/19b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeBit- 00:07:00.448 [2024-11-26 20:09:13.204328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.204361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.204412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.204429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.204464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.204481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.204511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:00.448 [2024-11-26 20:09:13.204527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.204558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.204574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.448 #7 NEW cov: 12441 ft: 14379 corp: 6/24b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:00.448 [2024-11-26 20:09:13.294595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.294647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.294681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.448 [2024-11-26 20:09:13.294697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.448 [2024-11-26 20:09:13.294728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.449 [2024-11-26 20:09:13.294743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.449 [2024-11-26 20:09:13.294772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.449 [2024-11-26 20:09:13.294788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.449 [2024-11-26 20:09:13.294833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.449 [2024-11-26 20:09:13.294849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.449 #8 NEW cov: 12441 ft: 14453 corp: 7/29b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:00.449 [2024-11-26 20:09:13.354659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.449 [2024-11-26 20:09:13.354690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.449 [2024-11-26 20:09:13.354724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.449 [2024-11-26 20:09:13.354741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.449 [2024-11-26 20:09:13.354771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 
cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.449 [2024-11-26 20:09:13.354787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.449 [2024-11-26 20:09:13.354816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.449 [2024-11-26 20:09:13.354836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.707 #9 NEW cov: 12441 ft: 14499 corp: 8/33b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeByte- 00:07:00.707 [2024-11-26 20:09:13.414900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.414930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.707 [2024-11-26 20:09:13.414979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.414995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.707 [2024-11-26 20:09:13.415025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.415041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.707 [2024-11-26 20:09:13.415070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.415086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.707 [2024-11-26 20:09:13.415115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.415130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.707 #10 NEW cov: 12441 ft: 14541 corp: 9/38b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:00.707 [2024-11-26 20:09:13.505104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.505134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.707 [2024-11-26 20:09:13.505183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.505200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.707 [2024-11-26 20:09:13.505230] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.707 [2024-11-26 20:09:13.505245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.708 [2024-11-26 20:09:13.505275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.708 [2024-11-26 20:09:13.505291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.708 [2024-11-26 20:09:13.505320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.708 [2024-11-26 20:09:13.505336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.708 #11 NEW cov: 12441 ft: 14658 corp: 10/43b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:07:00.708 [2024-11-26 20:09:13.595393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.708 [2024-11-26 20:09:13.595425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.708 [2024-11-26 20:09:13.595460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.708 [2024-11-26 20:09:13.595476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.708 [2024-11-26 20:09:13.595506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.708 [2024-11-26 20:09:13.595523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.708 [2024-11-26 20:09:13.595553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.708 [2024-11-26 20:09:13.595570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.708 [2024-11-26 20:09:13.595606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.708 [2024-11-26 20:09:13.595638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.966 #12 NEW cov: 12441 ft: 14696 corp: 11/48b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:07:00.966 [2024-11-26 20:09:13.695620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.966 [2024-11-26 20:09:13.695666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:00.966 [2024-11-26 20:09:13.695700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.966 [2024-11-26 20:09:13.695716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.966 [2024-11-26 20:09:13.695746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.966 [2024-11-26 20:09:13.695762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.966 [2024-11-26 20:09:13.695792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.966 [2024-11-26 20:09:13.695807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.966 [2024-11-26 20:09:13.695836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.966 [2024-11-26 20:09:13.695851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.223 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:01.223 #13 NEW cov: 12464 ft: 14780 corp: 12/53b lim: 5 exec/s: 13 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:07:01.223 [2024-11-26 20:09:14.016512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.223 [2024-11-26 20:09:14.016551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.016612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.016630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.016660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.016676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.016721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.016738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.016768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.016784] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.224 #14 NEW cov: 12464 ft: 14801 corp: 13/58b lim: 5 exec/s: 14 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:07:01.224 [2024-11-26 20:09:14.076540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.076571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.076647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.076664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.076694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.076710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.076739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.076755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.076785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.076801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.224 #15 NEW cov: 12464 ft: 14824 corp: 14/63b lim: 5 exec/s: 15 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:01.224 [2024-11-26 20:09:14.136653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.136684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.136733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.136749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.136784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.224 [2024-11-26 20:09:14.136801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.224 [2024-11-26 20:09:14.136831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:01.224 [2024-11-26 20:09:14.136846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.481 #16 NEW cov: 12464 ft: 14885 corp: 15/67b lim: 5 exec/s: 16 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:07:01.481 [2024-11-26 20:09:14.236918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.236949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.237001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.237018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.237049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.237067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.481 #17 NEW cov: 12464 ft: 15138 corp: 16/70b lim: 5 exec/s: 17 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:07:01.481 [2024-11-26 20:09:14.327171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.327200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.327249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.327265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.327295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.327311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.327340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.327356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.481 #18 NEW cov: 12464 ft: 15160 corp: 17/74b lim: 5 exec/s: 18 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:07:01.481 [2024-11-26 20:09:14.387403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.387433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 
20:09:14.387468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.387488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.387518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.387534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.387564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.387579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.481 [2024-11-26 20:09:14.387615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.481 [2024-11-26 20:09:14.387632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.740 #19 NEW cov: 12464 ft: 15174 corp: 18/79b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:07:01.740 [2024-11-26 20:09:14.477440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.477470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.740 [2024-11-26 20:09:14.477518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.477534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.740 #20 NEW cov: 12464 ft: 15357 corp: 19/81b lim: 5 exec/s: 20 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:07:01.740 [2024-11-26 20:09:14.567888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.567919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.740 [2024-11-26 20:09:14.567953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.567969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.740 [2024-11-26 20:09:14.567999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.568015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.740 [2024-11-26 20:09:14.568044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.568060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.740 [2024-11-26 20:09:14.568089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.568105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.740 #21 NEW cov: 12464 ft: 15395 corp: 20/86b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:07:01.740 [2024-11-26 20:09:14.668097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.668133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.740 [2024-11-26 20:09:14.668169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.668186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.740 [2024-11-26 20:09:14.668217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.740 [2024-11-26 20:09:14.668233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.999 #22 NEW cov: 12464 ft: 15415 corp: 21/89b lim: 5 exec/s: 22 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:07:01.999 [2024-11-26 20:09:14.728296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.728327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.728361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.728378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.728407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.728423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.728452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.728467] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.728496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.728512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.999 #23 NEW cov: 12464 ft: 15432 corp: 22/94b lim: 5 exec/s: 23 rss: 74Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:01.999 [2024-11-26 20:09:14.818505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.818535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.818583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.818606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.818640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.818657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.818686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.818705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.999 [2024-11-26 20:09:14.818735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.999 [2024-11-26 20:09:14.818751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.999 #24 NEW cov: 12464 ft: 15470 corp: 23/99b lim: 5 exec/s: 12 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:07:01.999 #24 DONE cov: 12464 ft: 15470 corp: 23/99b lim: 5 exec/s: 12 rss: 74Mb 00:07:01.999 Done 24 runs in 2 second(s) 00:07:02.257 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:02.257 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:02.258 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:02.258 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:02.258 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:02.258 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:02.258 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:02.258 20:09:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:02.258 20:09:14 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:02.258 20:09:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:02.258 [2024-11-26 20:09:15.041163] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:02.258 [2024-11-26 20:09:15.041236] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611037 ] 00:07:02.516 [2024-11-26 20:09:15.230229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.516 [2024-11-26 20:09:15.264426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.516 [2024-11-26 20:09:15.323497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.516 [2024-11-26 20:09:15.339846] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:02.516 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:02.516 INFO: Seed: 1020339075 00:07:02.516 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:02.516 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:02.516 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:02.516 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.516 [2024-11-26 20:09:15.406261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.516 [2024-11-26 20:09:15.406299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.516 #2 INITED cov: 12237 ft: 12238 corp: 1/1b exec/s: 0 rss: 72Mb 00:07:02.774 [2024-11-26 20:09:15.457506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.774 [2024-11-26 20:09:15.457537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.774 [2024-11-26 20:09:15.457672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.457691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.457824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.457840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.457977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.457995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.458131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.458151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.775 #3 NEW cov: 12350 ft: 13598 corp: 2/6b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:02.775 [2024-11-26 20:09:15.527761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.527793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.527925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.527944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.528069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.528088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.528207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.528225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.528352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.528369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.775 #4 NEW cov: 12356 ft: 13674 corp: 3/11b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:07:02.775 [2024-11-26 20:09:15.596814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.596845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.775 #5 NEW cov: 12441 ft: 14156 corp: 4/12b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:07:02.775 [2024-11-26 20:09:15.647174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.647205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.775 [2024-11-26 20:09:15.647344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.647363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.775 #6 NEW cov: 12441 ft: 14461 corp: 5/14b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:07:02.775 [2024-11-26 20:09:15.696969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.775 [2024-11-26 20:09:15.696999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.033 #7 NEW cov: 12441 ft: 14695 corp: 6/15b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:07:03.033 [2024-11-26 20:09:15.768365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.768395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.768535] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.768553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.768690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.768709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.768852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.768870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.768993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.769011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.033 #8 NEW cov: 12441 ft: 14764 corp: 7/20b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:03.033 [2024-11-26 20:09:15.817448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.817477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.033 #9 NEW cov: 12441 ft: 14831 corp: 8/21b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:07:03.033 [2024-11-26 20:09:15.888422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.888451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.888588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.888610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.888744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.888771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.888894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.888913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:03.033 #10 NEW cov: 12441 ft: 14849 corp: 9/25b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:03.033 [2024-11-26 20:09:15.939051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.939080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.939208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.939226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.939352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.939368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.939504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.939522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.033 [2024-11-26 20:09:15.939639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.033 [2024-11-26 20:09:15.939658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.033 #11 NEW cov: 12441 ft: 14979 corp: 10/30b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:03.292 [2024-11-26 20:09:15.987915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:15.987943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.292 #12 NEW cov: 12441 ft: 15026 corp: 11/31b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:07:03.292 [2024-11-26 20:09:16.039155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.039185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.039317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.039335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.039461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.039479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.039617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.039646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.039778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.039796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.292 #13 NEW cov: 12441 ft: 15061 corp: 12/36b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:03.292 [2024-11-26 20:09:16.089352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.089381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.089519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.089538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.089674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.089691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.089821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.089841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.089972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.089992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.292 #14 NEW cov: 12441 ft: 15115 corp: 13/41b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "\002\000\000\000"- 00:07:03.292 [2024-11-26 20:09:16.159620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.159648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.159784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.159807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.159937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.159955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.160076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.160094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.160222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.160239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.292 #15 NEW cov: 12441 ft: 15158 corp: 14/46b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:03.292 [2024-11-26 20:09:16.209861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.209890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.210022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.210041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.210161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.210179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.210305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.210322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.292 [2024-11-26 20:09:16.210453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.292 [2024-11-26 20:09:16.210471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.550 #16 NEW cov: 12441 ft: 15168 corp: 15/51b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:07:03.550 [2024-11-26 20:09:16.279673] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.550 [2024-11-26 20:09:16.279701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.550 [2024-11-26 20:09:16.279819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.550 [2024-11-26 20:09:16.279836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.550 [2024-11-26 20:09:16.279967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.550 [2024-11-26 20:09:16.279987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.550 [2024-11-26 20:09:16.280136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.550 [2024-11-26 20:09:16.280156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.808 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:03.808 #17 NEW cov: 12464 ft: 15204 corp: 16/55b lim: 5 exec/s: 17 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:07:03.808 [2024-11-26 20:09:16.610305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.610340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.808 [2024-11-26 20:09:16.610468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.610485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.808 [2024-11-26 20:09:16.610602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.610619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.808 [2024-11-26 20:09:16.610740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.610757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.808 #18 NEW cov: 12464 ft: 15280 corp: 17/59b lim: 5 exec/s: 18 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:07:03.808 [2024-11-26 20:09:16.680739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 
[2024-11-26 20:09:16.680770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.808 [2024-11-26 20:09:16.680905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.680923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.808 [2024-11-26 20:09:16.681052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.681070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.808 [2024-11-26 20:09:16.681197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.681215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.808 [2024-11-26 20:09:16.681337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.808 [2024-11-26 20:09:16.681355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.808 #19 NEW cov: 12464 ft: 15305 corp: 18/64b lim: 5 exec/s: 19 rss: 75Mb L: 5/5 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:04.067 [2024-11-26 20:09:16.739888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.739921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.067 #20 NEW cov: 12464 ft: 15311 corp: 19/65b lim: 5 exec/s: 20 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:04.067 [2024-11-26 20:09:16.810892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.810923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.811043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.811061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.811186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.811205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.811322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.811340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.067 #21 NEW cov: 12464 ft: 15345 corp: 20/69b lim: 5 exec/s: 21 rss: 75Mb L: 4/5 MS: 1 CopyPart- 00:07:04.067 [2024-11-26 20:09:16.850417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.850448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.850562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.850580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.067 #22 NEW cov: 12464 ft: 15375 corp: 21/71b lim: 5 exec/s: 22 rss: 75Mb L: 2/5 MS: 1 ChangeByte- 00:07:04.067 [2024-11-26 20:09:16.891373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.891402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.891519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.891538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.891676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.891695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.891810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.891829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.067 [2024-11-26 20:09:16.891954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.891973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.067 #23 NEW cov: 12464 ft: 15444 corp: 22/76b lim: 5 exec/s: 23 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:07:04.067 [2024-11-26 20:09:16.940463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.067 [2024-11-26 20:09:16.940492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.067 #24 NEW cov: 12464 ft: 15460 corp: 23/77b lim: 5 exec/s: 24 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:07:04.326 [2024-11-26 20:09:17.011704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.011732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.011855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.011872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.011985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.012001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.012111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.012129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.012241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.012257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.326 #25 NEW cov: 12464 ft: 15497 corp: 24/82b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:07:04.326 [2024-11-26 20:09:17.051809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.051838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.051959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.051977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.052101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.052117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.052236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.052253] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.052376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.052393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.326 #26 NEW cov: 12464 ft: 15504 corp: 25/87b lim: 5 exec/s: 26 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:07:04.326 [2024-11-26 20:09:17.090905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.090933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.326 #27 NEW cov: 12464 ft: 15538 corp: 26/88b lim: 5 exec/s: 27 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:04.326 [2024-11-26 20:09:17.131992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.132020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.132138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.132155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.132278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.132296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.132416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.132433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.132551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.132569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.326 #28 NEW cov: 12464 ft: 15545 corp: 27/93b lim: 5 exec/s: 28 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:07:04.326 [2024-11-26 20:09:17.191371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.191400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.191522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.191539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.326 #29 NEW cov: 12464 ft: 15609 corp: 28/95b lim: 5 exec/s: 29 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:07:04.326 [2024-11-26 20:09:17.232232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.232262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.232391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.326 [2024-11-26 20:09:17.232415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.326 [2024-11-26 20:09:17.232537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.327 [2024-11-26 20:09:17.232554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.327 [2024-11-26 20:09:17.232680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.327 [2024-11-26 20:09:17.232698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.327 [2024-11-26 20:09:17.232816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.327 [2024-11-26 20:09:17.232833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.585 #30 NEW cov: 12464 ft: 15672 corp: 29/100b lim: 5 exec/s: 30 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:04.585 [2024-11-26 20:09:17.272420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.272449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 20:09:17.272567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.272585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 20:09:17.272715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.272733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 
20:09:17.272853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.272871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 20:09:17.272995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.273014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.585 #31 NEW cov: 12464 ft: 15680 corp: 30/105b lim: 5 exec/s: 31 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:07:04.585 [2024-11-26 20:09:17.312492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.312526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 20:09:17.312648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.312666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 20:09:17.312786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.312806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 20:09:17.312928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.312948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.585 [2024-11-26 20:09:17.313073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.313091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.585 #32 NEW cov: 12464 ft: 15696 corp: 31/110b lim: 5 exec/s: 32 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:07:04.585 [2024-11-26 20:09:17.372298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.585 [2024-11-26 20:09:17.372326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.586 [2024-11-26 20:09:17.372446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.586 [2024-11-26 20:09:17.372465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.586 [2024-11-26 20:09:17.372588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.586 [2024-11-26 20:09:17.372610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.586 [2024-11-26 20:09:17.372739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.586 [2024-11-26 20:09:17.372756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.586 #33 NEW cov: 12464 ft: 15702 corp: 32/114b lim: 5 exec/s: 16 rss: 75Mb L: 4/5 MS: 1 ChangeBit- 00:07:04.586 #33 DONE cov: 12464 ft: 15702 corp: 32/114b lim: 5 exec/s: 16 rss: 75Mb 00:07:04.586 ###### Recommended dictionary. ###### 00:07:04.586 "\002\000\000\000" # Uses: 0 00:07:04.586 "\001\000\000\000" # Uses: 0 00:07:04.586 ###### End of recommended dictionary. ###### 00:07:04.586 Done 33 runs in 2 second(s) 00:07:04.586 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.844 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.845 20:09:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:04.845 [2024-11-26 20:09:17.559176] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:04.845 [2024-11-26 20:09:17.559243] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611566 ] 00:07:04.845 [2024-11-26 20:09:17.751648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.104 [2024-11-26 20:09:17.785895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.104 [2024-11-26 20:09:17.844954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.104 [2024-11-26 20:09:17.861248] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:05.104 INFO: Running with entropic power schedule (0xFF, 100). 00:07:05.104 INFO: Seed: 3541339100 00:07:05.104 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:05.104 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:05.104 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:05.104 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.104 #2 INITED exec/s: 0 rss: 65Mb 00:07:05.104 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:05.104 This may also happen if the target rejected all inputs we tried so far 00:07:05.104 [2024-11-26 20:09:17.906952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.104 [2024-11-26 20:09:17.906979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.104 [2024-11-26 20:09:17.907040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.104 [2024-11-26 20:09:17.907054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.104 [2024-11-26 20:09:17.907112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.104 [2024-11-26 20:09:17.907125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.104 [2024-11-26 20:09:17.907178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.104 [2024-11-26 20:09:17.907191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.363 NEW_FUNC[1/716]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:05.363 
NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:05.363 #7 NEW cov: 12260 ft: 12259 corp: 2/40b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 5 ShuffleBytes-CopyPart-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:05.363 [2024-11-26 20:09:18.227953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.363 [2024-11-26 20:09:18.227984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.363 [2024-11-26 20:09:18.228049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.363 [2024-11-26 20:09:18.228063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.363 [2024-11-26 20:09:18.228140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.363 [2024-11-26 20:09:18.228154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.363 [2024-11-26 20:09:18.228214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.363 [2024-11-26 20:09:18.228227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.363 #8 NEW cov: 12373 ft: 12877 corp: 3/79b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 ChangeBit- 00:07:05.363 [2024-11-26 20:09:18.287862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.363 [2024-11-26 20:09:18.287888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.363 [2024-11-26 20:09:18.287954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.363 [2024-11-26 20:09:18.287968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.363 [2024-11-26 20:09:18.288031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.363 [2024-11-26 20:09:18.288045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.623 #9 NEW cov: 12379 ft: 13574 corp: 4/108b lim: 40 exec/s: 0 rss: 73Mb L: 29/39 MS: 1 CrossOver- 00:07:05.623 [2024-11-26 20:09:18.328098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.328124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 
20:09:18.328187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.328201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.328266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.328279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.328341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.328357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.623 #10 NEW cov: 12464 ft: 13932 corp: 5/147b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:07:05.623 [2024-11-26 20:09:18.387993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.388019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.388079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.388093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.623 #11 NEW cov: 12464 ft: 14348 corp: 6/169b lim: 40 exec/s: 0 rss: 73Mb L: 22/39 MS: 1 EraseBytes- 00:07:05.623 [2024-11-26 20:09:18.448545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.448571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.448634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:752e7575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.448649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.448723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.448738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.448799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.448812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.448877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:7575750a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.448891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.623 #12 NEW cov: 12464 ft: 14477 corp: 7/209b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertByte- 00:07:05.623 [2024-11-26 20:09:18.488541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.488566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.488629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.488643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.488719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.488733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.488793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7d757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.488809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.623 #13 NEW cov: 12464 ft: 14545 corp: 8/248b lim: 40 exec/s: 0 rss: 73Mb L: 39/40 MS: 1 ChangeBit- 00:07:05.623 [2024-11-26 20:09:18.528534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.623 [2024-11-26 20:09:18.528559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.623 [2024-11-26 20:09:18.528614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:01757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.624 [2024-11-26 20:09:18.528629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.624 [2024-11-26 20:09:18.528688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.624 [2024-11-26 20:09:18.528701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.624 #14 NEW cov: 12464 ft: 14578 corp: 9/278b lim: 40 exec/s: 0 rss: 73Mb L: 30/40 MS: 1 InsertByte- 00:07:05.883 [2024-11-26 20:09:18.568898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:05.883 [2024-11-26 20:09:18.568923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.883 [2024-11-26 20:09:18.568990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.883 [2024-11-26 20:09:18.569005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.883 [2024-11-26 20:09:18.569067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.883 [2024-11-26 20:09:18.569080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.883 [2024-11-26 20:09:18.569141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.883 [2024-11-26 20:09:18.569154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.883 [2024-11-26 20:09:18.569215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:7575750a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.883 [2024-11-26 20:09:18.569228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.883 #15 NEW cov: 12464 ft: 14637 corp: 10/318b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:07:05.883 [2024-11-26 20:09:18.608453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.883 [2024-11-26 20:09:18.608477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.883 #17 NEW cov: 12464 ft: 15011 corp: 11/330b lim: 40 exec/s: 0 rss: 73Mb L: 12/40 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:05.883 [2024-11-26 20:09:18.649130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a887575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.883 [2024-11-26 20:09:18.649155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.649240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.649254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.649318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.649331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.649395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE 
(82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.649409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.649472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:7575750a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.649485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.884 #18 NEW cov: 12464 ft: 15031 corp: 12/370b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:07:05.884 [2024-11-26 20:09:18.709312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.709338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.709401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.709416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.709480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.709493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.709558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7d757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.709571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.709634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757532 cdw11:7575750a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.709648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.884 #19 NEW cov: 12464 ft: 15101 corp: 13/410b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertByte- 00:07:05.884 [2024-11-26 20:09:18.769327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.769352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.769418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:757575f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.769431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.769516] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.769530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.769594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.769611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.884 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:05.884 #20 NEW cov: 12487 ft: 15122 corp: 14/449b lim: 40 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 CopyPart- 00:07:05.884 [2024-11-26 20:09:18.809430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.809456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.809522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.809536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.809603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.809617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.884 [2024-11-26 20:09:18.809676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.884 [2024-11-26 20:09:18.809689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.144 #21 NEW cov: 12487 ft: 15128 corp: 15/485b lim: 40 exec/s: 0 rss: 74Mb L: 36/40 MS: 1 CopyPart- 00:07:06.144 [2024-11-26 20:09:18.869239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00007575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:18.869264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.144 #22 NEW cov: 12487 ft: 15145 corp: 16/497b lim: 40 exec/s: 22 rss: 74Mb L: 12/40 MS: 1 CrossOver- 00:07:06.144 [2024-11-26 20:09:18.929831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:18.929857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:18.929921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 
cdw10:75757575 cdw11:757575f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:18.929936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:18.929999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:18.930012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:18.930073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:18.930090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.144 #23 NEW cov: 12487 ft: 15187 corp: 17/536b lim: 40 exec/s: 23 rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:07:06.144 [2024-11-26 20:09:18.989699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:18.989723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:18.989787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:18.989801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.144 #24 NEW cov: 12487 ft: 15196 corp: 18/557b lim: 40 exec/s: 24 rss: 74Mb L: 21/40 MS: 1 InsertRepeatedBytes- 00:07:06.144 [2024-11-26 20:09:19.030108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.030133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:19.030200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.030214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:19.030276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.030290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:19.030355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.030369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.144 #25 NEW cov: 12487 ft: 15201 corp: 19/596b lim: 40 exec/s: 25 
rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:07:06.144 [2024-11-26 20:09:19.070358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:d1757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.070383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:19.070465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.070479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:19.070540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.070554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:19.070616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.070629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.144 [2024-11-26 20:09:19.070695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:7c75750a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.144 [2024-11-26 20:09:19.070711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.420 #26 NEW cov: 12487 ft: 15212 corp: 20/636b lim: 40 exec/s: 26 rss: 74Mb L: 40/40 MS: 1 InsertByte- 00:07:06.420 [2024-11-26 20:09:19.130116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.420 [2024-11-26 20:09:19.130142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.420 [2024-11-26 20:09:19.130222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.420 [2024-11-26 20:09:19.130237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.420 #27 NEW cov: 12487 ft: 15224 corp: 21/657b lim: 40 exec/s: 27 rss: 74Mb L: 21/40 MS: 1 EraseBytes- 00:07:06.420 [2024-11-26 20:09:19.170615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.420 [2024-11-26 20:09:19.170640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.420 [2024-11-26 20:09:19.170703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75750000 cdw11:00007575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.170717] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.170777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.170790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.170853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.170867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.170926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.170940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.421 #28 NEW cov: 12487 ft: 15227 corp: 22/697b lim: 40 exec/s: 28 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:06.421 [2024-11-26 20:09:19.230708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.230733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.230800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.230815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.230879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.230892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.230954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7d757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.230971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.421 #29 NEW cov: 12487 ft: 15237 corp: 23/736b lim: 40 exec/s: 29 rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:07:06.421 [2024-11-26 20:09:19.270915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.270940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.271004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.271018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.271080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.271094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.271156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.271170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.421 [2024-11-26 20:09:19.271231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.271245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.421 #30 NEW cov: 12487 ft: 15269 corp: 24/776b lim: 40 exec/s: 30 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:07:06.421 [2024-11-26 20:09:19.330532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.421 [2024-11-26 20:09:19.330557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.691 #31 NEW cov: 12487 ft: 15328 corp: 25/787b lim: 40 exec/s: 31 rss: 74Mb L: 11/40 MS: 1 EraseBytes- 00:07:06.691 [2024-11-26 20:09:19.391255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.691 [2024-11-26 20:09:19.391279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.691 [2024-11-26 20:09:19.391345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.691 [2024-11-26 20:09:19.391359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.691 [2024-11-26 20:09:19.391422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.691 [2024-11-26 20:09:19.391435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.691 [2024-11-26 20:09:19.391499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7d757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.691 [2024-11-26 20:09:19.391512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.691 [2024-11-26 20:09:19.391577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 
cdw10:75757532 cdw11:7575750a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.691 [2024-11-26 20:09:19.391591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.691 #32 NEW cov: 12487 ft: 15364 corp: 26/827b lim: 40 exec/s: 32 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:07:06.691 [2024-11-26 20:09:19.451447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.691 [2024-11-26 20:09:19.451472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.691 [2024-11-26 20:09:19.451537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75750000 cdw11:00007575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.691 [2024-11-26 20:09:19.451551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.691 [2024-11-26 20:09:19.451617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.451631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.451693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.451706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.451768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.451781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.692 #38 NEW cov: 12487 ft: 15376 corp: 27/867b lim: 40 exec/s: 38 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:06.692 [2024-11-26 20:09:19.491524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.491549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.491618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75750000 cdw11:00007575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.491633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.491694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75755975 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.491708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.491772] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.491785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.491846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.491860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.692 #39 NEW cov: 12487 ft: 15417 corp: 28/907b lim: 40 exec/s: 39 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:07:06.692 [2024-11-26 20:09:19.531563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.531588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.531656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.531670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.531731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.531744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.531808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.531821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.692 #40 NEW cov: 12487 ft: 15431 corp: 29/946b lim: 40 exec/s: 40 rss: 75Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:06.692 [2024-11-26 20:09:19.591887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.591911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.591977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:752e0000 cdw11:00007575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.591990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.592068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.592082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:06.692 [2024-11-26 20:09:19.592142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.592156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.692 [2024-11-26 20:09:19.592222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.692 [2024-11-26 20:09:19.592235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.692 #41 NEW cov: 12487 ft: 15448 corp: 30/986b lim: 40 exec/s: 41 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:07:06.952 [2024-11-26 20:09:19.631840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.631866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.631930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.631944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.632013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.632027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.632087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.632100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.952 #42 NEW cov: 12487 ft: 15462 corp: 31/1025b lim: 40 exec/s: 42 rss: 75Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:06.952 [2024-11-26 20:09:19.671898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.671923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.671991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.672006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.672065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.672078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.672142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.672155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.952 #43 NEW cov: 12487 ft: 15478 corp: 32/1061b lim: 40 exec/s: 43 rss: 75Mb L: 36/40 MS: 1 CopyPart- 00:07:06.952 [2024-11-26 20:09:19.712053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.712077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.712142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.712156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.712236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.712249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.712314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.712329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.952 #44 NEW cov: 12487 ft: 15501 corp: 33/1096b lim: 40 exec/s: 44 rss: 75Mb L: 35/40 MS: 1 EraseBytes- 00:07:06.952 [2024-11-26 20:09:19.771962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.771988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.772055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.772069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.952 #45 NEW cov: 12487 ft: 15535 corp: 34/1117b lim: 40 exec/s: 45 rss: 75Mb L: 21/40 MS: 1 CMP- DE: "\377\000"- 00:07:06.952 [2024-11-26 20:09:19.832385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a887575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.832409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.832470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 
cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.832484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.832550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.832563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.952 [2024-11-26 20:09:19.832629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.952 [2024-11-26 20:09:19.832643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.952 #46 NEW cov: 12487 ft: 15550 corp: 35/1152b lim: 40 exec/s: 46 rss: 75Mb L: 35/40 MS: 1 EraseBytes- 00:07:07.212 [2024-11-26 20:09:19.892736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a757575 cdw11:7575f575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.212 [2024-11-26 20:09:19.892762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.212 [2024-11-26 20:09:19.892827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.212 [2024-11-26 20:09:19.892840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.212 [2024-11-26 20:09:19.892904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.212 [2024-11-26 20:09:19.892917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.212 [2024-11-26 20:09:19.892982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:7d757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.212 [2024-11-26 20:09:19.892996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.212 [2024-11-26 20:09:19.893057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:7575758b cdw11:8e75750a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.212 [2024-11-26 20:09:19.893071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.212 #47 NEW cov: 12487 ft: 15559 corp: 36/1192b lim: 40 exec/s: 23 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:07:07.212 #47 DONE cov: 12487 ft: 15559 corp: 36/1192b lim: 40 exec/s: 23 rss: 75Mb 00:07:07.212 ###### Recommended dictionary. ###### 00:07:07.212 "\377\000" # Uses: 0 00:07:07.212 ###### End of recommended dictionary. 
###### 00:07:07.212 Done 47 runs in 2 second(s) 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:07.212 20:09:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:07.212 [2024-11-26 20:09:20.080497] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
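[editor's note] The run.sh trace above shows how the harness parameterizes fuzzer #11: it derives TCP port 4411 from the fuzzer index, rewrites the listener port in the shared fuzz_json.conf, records two LSAN leak suppressions, and then launches llvm_nvme_fuzz against that transport ID. The following is a minimal illustrative sketch of that setup, not part of the log; variable names other than those shown in the trace, the "44" port prefix, and the redirection of the suppression entries are assumptions made for readability.

    #!/usr/bin/env bash
    # Reconstructed sketch of the traced nvmf/run.sh steps (paths shortened).
    fuzzer_type=11          # shown as 'local fuzzer_type=11' in the trace
    timen=1                 # run duration passed to llvm_nvme_fuzz via -t
    core=0x1                # core mask passed via -m
    # The trace shows 'printf %02d 11' followed by port=4411, so a 44<NN>
    # port-per-fuzzer scheme is assumed here.
    port="44$(printf %02d "$fuzzer_type")"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    # Rewrite the default listener port (4420) in the template config,
    # mirroring the traced sed invocation.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        test/fuzz/llvm/nvmf/fuzz_json.conf > "$nvmf_cfg"
    # Known allocations that outlive the run are suppressed for LSAN; the trace
    # does not show the redirection explicitly, so the target file is assumed
    # to be the suppression file referenced by LSAN_OPTIONS.
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

llvm_nvme_fuzz is then invoked with -m $core, -F "$trid", -c "$nvmf_cfg", -t $timen and the per-fuzzer corpus directory, exactly as in the command line recorded above. [end editor's note]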
00:07:07.212 [2024-11-26 20:09:20.080561] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611921 ] 00:07:07.471 [2024-11-26 20:09:20.353249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.730 [2024-11-26 20:09:20.402292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.730 [2024-11-26 20:09:20.461493] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.730 [2024-11-26 20:09:20.477867] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:07.730 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.730 INFO: Seed: 1861374340 00:07:07.730 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:07.730 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:07.730 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:07.730 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.730 #2 INITED exec/s: 0 rss: 65Mb 00:07:07.730 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:07.730 This may also happen if the target rejected all inputs we tried so far 00:07:07.730 [2024-11-26 20:09:20.537602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.730 [2024-11-26 20:09:20.537633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.730 [2024-11-26 20:09:20.537699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.730 [2024-11-26 20:09:20.537712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.730 [2024-11-26 20:09:20.537769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.730 [2024-11-26 20:09:20.537783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.989 NEW_FUNC[1/716]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:07.989 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:07.989 #9 NEW cov: 12269 ft: 12271 corp: 2/25b lim: 40 exec/s: 0 rss: 73Mb L: 24/24 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:07.989 [2024-11-26 20:09:20.878251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.989 [2024-11-26 20:09:20.878284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.989 [2024-11-26 20:09:20.878347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 
cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.989 [2024-11-26 20:09:20.878361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.989 NEW_FUNC[1/1]: 0xfa8ab8 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:07:07.989 #11 NEW cov: 12385 ft: 13020 corp: 3/44b lim: 40 exec/s: 0 rss: 73Mb L: 19/24 MS: 2 ChangeByte-CrossOver- 00:07:07.989 [2024-11-26 20:09:20.918285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.989 [2024-11-26 20:09:20.918312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.989 [2024-11-26 20:09:20.918372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.989 [2024-11-26 20:09:20.918387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.249 #12 NEW cov: 12391 ft: 13264 corp: 4/64b lim: 40 exec/s: 0 rss: 73Mb L: 20/24 MS: 1 EraseBytes- 00:07:08.249 [2024-11-26 20:09:20.978568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:20.978601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.249 [2024-11-26 20:09:20.978659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:28161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:20.978673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.249 [2024-11-26 20:09:20.978729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:20.978743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.249 #13 NEW cov: 12476 ft: 13514 corp: 5/89b lim: 40 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertByte- 00:07:08.249 [2024-11-26 20:09:21.018530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.018559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.249 [2024-11-26 20:09:21.018619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.018633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.249 #14 NEW cov: 12476 ft: 13556 corp: 6/108b lim: 40 exec/s: 0 rss: 73Mb L: 19/25 MS: 1 CMP- DE: "\000\221\332\331\217\331G\344"- 00:07:08.249 [2024-11-26 20:09:21.079004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.079030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.249 [2024-11-26 20:09:21.079089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:47e41616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.079103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.249 [2024-11-26 20:09:21.079157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.079170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.249 [2024-11-26 20:09:21.079228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.079242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.249 #15 NEW cov: 12476 ft: 13932 corp: 7/140b lim: 40 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\221\332\331\217\331G\344"- 00:07:08.249 [2024-11-26 20:09:21.118798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1616161e cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.118824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.249 [2024-11-26 20:09:21.118884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.249 [2024-11-26 20:09:21.118898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.249 #16 NEW cov: 12476 ft: 14021 corp: 8/159b lim: 40 exec/s: 0 rss: 73Mb L: 19/32 MS: 1 ChangeBit- 00:07:08.509 [2024-11-26 20:09:21.179136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.179163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.179224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.179238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.179297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:47e40091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.179311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.509 #17 NEW cov: 12476 ft: 14056 corp: 
9/186b lim: 40 exec/s: 0 rss: 73Mb L: 27/32 MS: 1 PersAutoDict- DE: "\000\221\332\331\217\331G\344"- 00:07:08.509 [2024-11-26 20:09:21.219243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.219268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.219328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.219342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.219399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.219412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.509 #18 NEW cov: 12476 ft: 14133 corp: 10/213b lim: 40 exec/s: 0 rss: 73Mb L: 27/32 MS: 1 InsertRepeatedBytes- 00:07:08.509 [2024-11-26 20:09:21.259502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.259528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.259587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.259606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.259662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.259675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.259735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:160091da cdw11:d98fd947 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.259748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.509 #19 NEW cov: 12476 ft: 14159 corp: 11/247b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:08.509 [2024-11-26 20:09:21.299496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0091da cdw11:d98fd947 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.299521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.299579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e4161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 
20:09:21.299594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.299656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.299669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.509 #20 NEW cov: 12476 ft: 14211 corp: 12/275b lim: 40 exec/s: 0 rss: 73Mb L: 28/34 MS: 1 PersAutoDict- DE: "\000\221\332\331\217\331G\344"- 00:07:08.509 [2024-11-26 20:09:21.359786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16160616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.359820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.359879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.359893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.359949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.359962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.360018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:160091da cdw11:d98fd947 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.360031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.509 #21 NEW cov: 12476 ft: 14291 corp: 13/309b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeBit- 00:07:08.509 [2024-11-26 20:09:21.419955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1616161e cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.419982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.420039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.420053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.420108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:47b4b4b4 cdw11:b4b4b4b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.420121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.509 [2024-11-26 20:09:21.420177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:b4b4b4b4 cdw11:b4b4e42c SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:08.509 [2024-11-26 20:09:21.420190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.769 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:08.769 #22 NEW cov: 12499 ft: 14380 corp: 14/341b lim: 40 exec/s: 0 rss: 74Mb L: 32/34 MS: 1 InsertRepeatedBytes- 00:07:08.769 [2024-11-26 20:09:21.480127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16160616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.480153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.480213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.480227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.480298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.480312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.480370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:000091da cdw11:d98fd947 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.480386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.769 #23 NEW cov: 12499 ft: 14414 corp: 15/375b lim: 40 exec/s: 23 rss: 74Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:08.769 [2024-11-26 20:09:21.540279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16160616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.540305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.540366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000016 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.540380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.540437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.540449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.540503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000016 cdw11:0091dad9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.540516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.769 #24 NEW cov: 12499 ft: 
14443 corp: 16/412b lim: 40 exec/s: 24 rss: 74Mb L: 37/37 MS: 1 CrossOver- 00:07:08.769 [2024-11-26 20:09:21.580085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:161616b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.580111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.580170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b4b4b4b4 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.580184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.769 #25 NEW cov: 12499 ft: 14501 corp: 17/431b lim: 40 exec/s: 25 rss: 74Mb L: 19/37 MS: 1 CrossOver- 00:07:08.769 [2024-11-26 20:09:21.620391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.620417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.620479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000016 cdw11:16160016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.620492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.620547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.620561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.769 #26 NEW cov: 12499 ft: 14509 corp: 18/458b lim: 40 exec/s: 26 rss: 74Mb L: 27/37 MS: 1 ShuffleBytes- 00:07:08.769 [2024-11-26 20:09:21.680356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.680382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.769 [2024-11-26 20:09:21.680444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.769 [2024-11-26 20:09:21.680461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.029 #27 NEW cov: 12499 ft: 14581 corp: 19/478b lim: 40 exec/s: 27 rss: 74Mb L: 20/37 MS: 1 CrossOver- 00:07:09.029 [2024-11-26 20:09:21.720828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1616ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.720854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.029 [2024-11-26 20:09:21.720913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.720927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.029 [2024-11-26 20:09:21.720988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.721001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.029 [2024-11-26 20:09:21.721057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:0091dad9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.721071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.029 #28 NEW cov: 12499 ft: 14597 corp: 20/515b lim: 40 exec/s: 28 rss: 74Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:09.029 [2024-11-26 20:09:21.760609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16162e16 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.760636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.029 [2024-11-26 20:09:21.760693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.760707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.029 #29 NEW cov: 12499 ft: 14598 corp: 21/534b lim: 40 exec/s: 29 rss: 74Mb L: 19/37 MS: 1 ChangeByte- 00:07:09.029 [2024-11-26 20:09:21.801027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a161616 cdw11:160091da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.801054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.029 [2024-11-26 20:09:21.801112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d98fd947 cdw11:e4161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.029 [2024-11-26 20:09:21.801126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.029 [2024-11-26 20:09:21.801186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.801200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.030 [2024-11-26 20:09:21.801255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.801268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.030 #30 NEW cov: 12499 ft: 14613 corp: 22/566b lim: 40 exec/s: 30 rss: 74Mb L: 32/37 MS: 1 PersAutoDict- DE: "\000\221\332\331\217\331G\344"- 
00:07:09.030 [2024-11-26 20:09:21.841153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a161616 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.841180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.030 [2024-11-26 20:09:21.841240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:160091da cdw11:d98fd947 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.841253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.030 [2024-11-26 20:09:21.841308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e4161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.841321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.030 [2024-11-26 20:09:21.841380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.841393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.030 #31 NEW cov: 12499 ft: 14662 corp: 23/602b lim: 40 exec/s: 31 rss: 74Mb L: 36/37 MS: 1 InsertRepeatedBytes- 00:07:09.030 [2024-11-26 20:09:21.901035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0091dad9 cdw11:8fd947e4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.901061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.030 [2024-11-26 20:09:21.901120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.901134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.030 #32 NEW cov: 12499 ft: 14692 corp: 24/622b lim: 40 exec/s: 32 rss: 74Mb L: 20/37 MS: 1 PersAutoDict- DE: "\000\221\332\331\217\331G\344"- 00:07:09.030 [2024-11-26 20:09:21.941268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16162e16 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.941294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.030 [2024-11-26 20:09:21.941352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160000 cdw11:91dad98f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.941366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.030 [2024-11-26 20:09:21.941422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d947e491 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.030 [2024-11-26 20:09:21.941435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.290 #33 NEW cov: 12499 ft: 14737 corp: 25/649b lim: 40 exec/s: 33 rss: 74Mb L: 27/37 MS: 1 PersAutoDict- DE: "\000\221\332\331\217\331G\344"- 00:07:09.290 [2024-11-26 20:09:22.001616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.001641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.001701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.001718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.001775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.001788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.001841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:160091da cdw11:d916168f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.001854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.290 #34 NEW cov: 12499 ft: 14811 corp: 26/685b lim: 40 exec/s: 34 rss: 74Mb L: 36/37 MS: 1 CrossOver- 00:07:09.290 [2024-11-26 20:09:22.041425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1616161e cdw11:164e1616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.041450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.041509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:dad98fd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.041523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.290 #35 NEW cov: 12499 ft: 14819 corp: 27/704b lim: 40 exec/s: 35 rss: 74Mb L: 19/37 MS: 1 ChangeByte- 00:07:09.290 [2024-11-26 20:09:22.081810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a161616 cdw11:160091da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.081835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.081895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d98fd947 cdw11:e4161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.081909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.081963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 
cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.081977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.082031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.082044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.290 #36 NEW cov: 12499 ft: 14826 corp: 28/736b lim: 40 exec/s: 36 rss: 74Mb L: 32/37 MS: 1 ShuffleBytes- 00:07:09.290 [2024-11-26 20:09:22.121611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1616161e cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.121636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.121693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0091dad9 cdw11:8fd947e4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.121707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.290 #37 NEW cov: 12499 ft: 14837 corp: 29/753b lim: 40 exec/s: 37 rss: 74Mb L: 17/37 MS: 1 EraseBytes- 00:07:09.290 [2024-11-26 20:09:22.161757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16162e16 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.290 [2024-11-26 20:09:22.161786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.290 [2024-11-26 20:09:22.161843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16160091 cdw11:e2dad98f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.291 [2024-11-26 20:09:22.161857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.291 #38 NEW cov: 12499 ft: 14838 corp: 30/773b lim: 40 exec/s: 38 rss: 74Mb L: 20/37 MS: 1 InsertByte- 00:07:09.291 [2024-11-26 20:09:22.202011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.291 [2024-11-26 20:09:22.202036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.291 [2024-11-26 20:09:22.202095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000016 cdw11:167a1600 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.291 [2024-11-26 20:09:22.202109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.291 [2024-11-26 20:09:22.202164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161600 cdw11:91dad98f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.291 [2024-11-26 20:09:22.202176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.550 #39 NEW cov: 12499 
ft: 14845 corp: 31/801b lim: 40 exec/s: 39 rss: 74Mb L: 28/37 MS: 1 InsertByte- 00:07:09.550 [2024-11-26 20:09:22.262333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a161656 cdw11:160091da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.262359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.262419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d98fd947 cdw11:e4161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.262433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.262490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.262503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.262562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.262575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.550 #40 NEW cov: 12499 ft: 14854 corp: 32/833b lim: 40 exec/s: 40 rss: 74Mb L: 32/37 MS: 1 ChangeBit- 00:07:09.550 [2024-11-26 20:09:22.322396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0091da cdw11:d98fd947 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.322421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.322484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e4161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.322498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.322570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.322587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.550 #41 NEW cov: 12499 ft: 14868 corp: 33/861b lim: 40 exec/s: 41 rss: 75Mb L: 28/37 MS: 1 CopyPart- 00:07:09.550 [2024-11-26 20:09:22.382696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:30160616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.382721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.382779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.382792] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.382851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.382864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.550 [2024-11-26 20:09:22.382920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:000091da cdw11:d98fd947 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.550 [2024-11-26 20:09:22.382933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.550 #42 NEW cov: 12499 ft: 14889 corp: 34/895b lim: 40 exec/s: 42 rss: 75Mb L: 34/37 MS: 1 ChangeByte- 00:07:09.551 [2024-11-26 20:09:22.442832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16160616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.551 [2024-11-26 20:09:22.442857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.551 [2024-11-26 20:09:22.442918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000016 cdw11:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.551 [2024-11-26 20:09:22.442931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.551 [2024-11-26 20:09:22.442987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.551 [2024-11-26 20:09:22.443000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.551 [2024-11-26 20:09:22.443060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000016 cdw11:0091dad9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.551 [2024-11-26 20:09:22.443073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.841 #43 NEW cov: 12499 ft: 14894 corp: 35/932b lim: 40 exec/s: 43 rss: 75Mb L: 37/37 MS: 1 ChangeBinInt- 00:07:09.841 [2024-11-26 20:09:22.502732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1616161e cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.841 [2024-11-26 20:09:22.502759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.841 [2024-11-26 20:09:22.502815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:008fdad9 cdw11:8fd947e4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.841 [2024-11-26 20:09:22.502828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.841 #44 NEW cov: 12499 ft: 14910 corp: 36/949b lim: 40 exec/s: 22 rss: 75Mb L: 17/37 MS: 1 ChangeBinInt- 00:07:09.841 #44 DONE cov: 12499 ft: 14910 corp: 36/949b lim: 40 exec/s: 22 rss: 75Mb 00:07:09.841 ###### Recommended dictionary. 
###### 00:07:09.841 "\000\221\332\331\217\331G\344" # Uses: 6 00:07:09.841 ###### End of recommended dictionary. ###### 00:07:09.841 Done 44 runs in 2 second(s) 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:09.841 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:09.842 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:09.842 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.842 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.842 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.842 20:09:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:09.842 [2024-11-26 20:09:22.689525] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:09.842 [2024-11-26 20:09:22.689605] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612392 ] 00:07:10.130 [2024-11-26 20:09:22.873496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.130 [2024-11-26 20:09:22.906921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.130 [2024-11-26 20:09:22.965888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.130 [2024-11-26 20:09:22.982243] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:10.130 INFO: Running with entropic power schedule (0xFF, 100). 00:07:10.130 INFO: Seed: 72401373 00:07:10.130 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:10.130 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:10.130 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:10.130 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.130 #2 INITED exec/s: 0 rss: 65Mb 00:07:10.130 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:10.130 This may also happen if the target rejected all inputs we tried so far 00:07:10.130 [2024-11-26 20:09:23.027022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff5b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.130 [2024-11-26 20:09:23.027062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.712 NEW_FUNC[1/716]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:10.712 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:10.712 #4 NEW cov: 12262 ft: 12226 corp: 2/10b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 2 CMP-CMP- DE: "t\000\000\000"-"\377\377\377["- 00:07:10.712 [2024-11-26 20:09:23.387958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:74c7ffff cdw11:ff5b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.712 [2024-11-26 20:09:23.387997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.712 NEW_FUNC[1/1]: 0x19c2448 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1190 00:07:10.712 #5 NEW cov: 12383 ft: 12803 corp: 3/19b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:07:10.712 [2024-11-26 20:09:23.488101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.712 [2024-11-26 20:09:23.488133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.712 #6 NEW cov: 12389 ft: 13124 corp: 4/28b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CMP- DE: "\001\004\000\000\000\000\000\000"- 00:07:10.713 [2024-11-26 20:09:23.548247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.713 [2024-11-26 20:09:23.548278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.713 #16 NEW cov: 12474 ft: 13406 corp: 5/37b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 ChangeByte-InsertByte-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:10.713 [2024-11-26 20:09:23.598322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:74c7ffff cdw11:ff5b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.713 [2024-11-26 20:09:23.598351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.970 #17 NEW cov: 12474 ft: 13556 corp: 6/46b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:10.970 [2024-11-26 20:09:23.688547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff5b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.970 [2024-11-26 20:09:23.688578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.970 #18 NEW cov: 12474 ft: 13715 corp: 7/55b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:10.970 [2024-11-26 20:09:23.748779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01040001 cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.970 [2024-11-26 20:09:23.748809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.970 [2024-11-26 20:09:23.748857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.970 [2024-11-26 20:09:23.748873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.970 #19 NEW cov: 12474 ft: 14503 corp: 8/72b lim: 40 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:07:10.970 [2024-11-26 20:09:23.848999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01040000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.970 [2024-11-26 20:09:23.849035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.970 #20 NEW cov: 12474 ft: 14558 corp: 9/81b lim: 40 exec/s: 0 rss: 73Mb L: 9/17 MS: 1 ShuffleBytes- 00:07:11.228 [2024-11-26 20:09:23.909153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01040000 cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.228 [2024-11-26 20:09:23.909184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.228 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:11.228 #21 NEW cov: 12497 ft: 14608 corp: 10/90b lim: 40 exec/s: 0 rss: 74Mb L: 9/17 MS: 1 ChangeBinInt- 00:07:11.228 [2024-11-26 20:09:23.999355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01047500 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.228 [2024-11-26 20:09:23.999384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.228 #22 NEW cov: 12497 ft: 14637 corp: 11/99b lim: 40 exec/s: 22 rss: 74Mb L: 9/17 MS: 1 ChangeByte- 00:07:11.228 [2024-11-26 20:09:24.059576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff5b01fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.228 [2024-11-26 20:09:24.059613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.228 #23 NEW cov: 12497 ft: 14697 corp: 12/108b lim: 40 exec/s: 23 rss: 74Mb L: 9/17 MS: 1 ChangeBinInt- 00:07:11.228 [2024-11-26 20:09:24.150031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.228 [2024-11-26 20:09:24.150062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.228 [2024-11-26 20:09:24.150096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.228 [2024-11-26 20:09:24.150112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.228 [2024-11-26 20:09:24.150141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.228 [2024-11-26 20:09:24.150157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.228 [2024-11-26 20:09:24.150186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.228 [2024-11-26 20:09:24.150201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.486 #27 NEW cov: 12497 ft: 15096 corp: 13/141b lim: 40 exec/s: 27 rss: 74Mb L: 33/33 MS: 4 ChangeBit-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:11.486 [2024-11-26 20:09:24.209933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff5b0031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.486 [2024-11-26 20:09:24.209964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.486 #28 NEW cov: 12497 ft: 15114 corp: 14/151b lim: 40 exec/s: 28 rss: 74Mb L: 10/33 MS: 1 InsertByte- 00:07:11.486 [2024-11-26 20:09:24.260100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:742bffff cdw11:ff5b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.486 [2024-11-26 20:09:24.260131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.486 #29 NEW cov: 12497 ft: 15146 corp: 15/160b lim: 40 exec/s: 29 rss: 74Mb L: 9/33 MS: 1 ChangeByte- 00:07:11.486 [2024-11-26 20:09:24.350352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:0a010400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.486 [2024-11-26 20:09:24.350382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.486 #30 NEW cov: 12497 ft: 15190 corp: 16/169b lim: 40 exec/s: 30 rss: 74Mb L: 9/33 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:07:11.486 [2024-11-26 20:09:24.400397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff5b01fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.486 [2024-11-26 20:09:24.400427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.744 #41 NEW cov: 12497 ft: 15251 corp: 17/178b lim: 40 exec/s: 41 rss: 74Mb L: 9/33 MS: 1 ChangeBit- 00:07:11.744 [2024-11-26 20:09:24.490671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff5b01fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.744 [2024-11-26 20:09:24.490700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.744 #42 NEW cov: 12497 ft: 15253 corp: 18/186b lim: 40 exec/s: 42 rss: 74Mb L: 8/33 MS: 1 EraseBytes- 00:07:11.744 [2024-11-26 20:09:24.580940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01040100 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.744 [2024-11-26 20:09:24.580971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.744 #45 NEW cov: 12497 ft: 15318 corp: 19/200b lim: 40 exec/s: 45 rss: 74Mb L: 14/33 MS: 3 EraseBytes-ShuffleBytes-CrossOver- 00:07:11.744 [2024-11-26 20:09:24.641032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01040000 cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.744 [2024-11-26 20:09:24.641063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.001 #46 NEW cov: 12497 ft: 15360 corp: 20/209b lim: 40 exec/s: 46 rss: 74Mb L: 9/33 MS: 1 ChangeBinInt- 00:07:12.001 [2024-11-26 20:09:24.691231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff010400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.001 [2024-11-26 20:09:24.691261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.002 [2024-11-26 20:09:24.691309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.002 [2024-11-26 20:09:24.691325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.002 #47 NEW cov: 12497 ft: 15375 corp: 21/226b lim: 40 exec/s: 47 rss: 74Mb L: 17/33 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:07:12.002 [2024-11-26 20:09:24.751365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01047500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.002 [2024-11-26 20:09:24.751395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.002 #48 NEW cov: 12497 ft: 15381 corp: 22/235b lim: 40 exec/s: 48 rss: 74Mb L: 9/33 MS: 1 CopyPart- 00:07:12.002 [2024-11-26 20:09:24.822213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ff74 cdw11:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.002 [2024-11-26 20:09:24.822240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.002 #49 NEW cov: 12497 ft: 15455 corp: 23/245b lim: 40 exec/s: 49 rss: 74Mb L: 10/33 MS: 1 PersAutoDict- DE: "t\000\000\000"- 00:07:12.002 [2024-11-26 20:09:24.882537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7400ffff cdw11:ff010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.002 [2024-11-26 20:09:24.882563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.002 [2024-11-26 20:09:24.882625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5b000000 cdw11:04000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.002 [2024-11-26 20:09:24.882655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.002 #50 NEW cov: 12497 ft: 15473 corp: 24/262b lim: 40 exec/s: 50 rss: 74Mb L: 17/33 MS: 1 ShuffleBytes- 00:07:12.260 [2024-11-26 20:09:24.942560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff2dff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.260 [2024-11-26 20:09:24.942586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.260 #51 NEW cov: 12497 ft: 15570 corp: 25/272b lim: 40 exec/s: 51 rss: 74Mb L: 10/33 MS: 1 InsertByte- 00:07:12.260 [2024-11-26 20:09:25.002717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:09000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.260 [2024-11-26 20:09:25.002743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.260 #52 NEW cov: 12497 ft: 15645 corp: 26/281b lim: 40 exec/s: 26 rss: 74Mb L: 9/33 MS: 1 ChangeBinInt- 00:07:12.260 #52 DONE cov: 12497 ft: 15645 corp: 26/281b lim: 40 exec/s: 26 rss: 74Mb 00:07:12.260 ###### Recommended dictionary. ###### 00:07:12.260 "t\000\000\000" # Uses: 1 00:07:12.260 "\377\377\377[" # Uses: 0 00:07:12.260 "\001\004\000\000\000\000\000\000" # Uses: 3 00:07:12.260 ###### End of recommended dictionary. 
###### 00:07:12.260 Done 52 runs in 2 second(s) 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:12.260 20:09:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:12.260 [2024-11-26 20:09:25.170098] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:12.260 [2024-11-26 20:09:25.170180] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612930 ] 00:07:12.518 [2024-11-26 20:09:25.357176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.518 [2024-11-26 20:09:25.390347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.776 [2024-11-26 20:09:25.449135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.777 [2024-11-26 20:09:25.465478] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:12.777 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.777 INFO: Seed: 2553397621 00:07:12.777 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:12.777 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:12.777 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:12.777 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.777 #2 INITED exec/s: 0 rss: 65Mb 00:07:12.777 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.777 This may also happen if the target rejected all inputs we tried so far 00:07:12.777 [2024-11-26 20:09:25.513718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.777 [2024-11-26 20:09:25.513755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.777 [2024-11-26 20:09:25.513790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.777 [2024-11-26 20:09:25.513807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.035 NEW_FUNC[1/716]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:13.035 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:13.035 #9 NEW cov: 12258 ft: 12257 corp: 2/19b lim: 40 exec/s: 0 rss: 73Mb L: 18/18 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:13.035 [2024-11-26 20:09:25.875071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.035 [2024-11-26 20:09:25.875103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.035 #10 NEW cov: 12371 ft: 13354 corp: 3/28b lim: 40 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 EraseBytes- 00:07:13.035 [2024-11-26 20:09:25.935127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.035 [2024-11-26 20:09:25.935154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:13.035 #11 NEW cov: 12377 ft: 13572 corp: 4/37b lim: 40 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 EraseBytes- 00:07:13.292 [2024-11-26 20:09:25.975205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.292 [2024-11-26 20:09:25.975232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.292 #12 NEW cov: 12462 ft: 13854 corp: 5/46b lim: 40 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 ChangeByte- 00:07:13.292 [2024-11-26 20:09:26.035503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:20000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.292 [2024-11-26 20:09:26.035528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.292 [2024-11-26 20:09:26.035588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.292 [2024-11-26 20:09:26.035608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.292 #18 NEW cov: 12462 ft: 13934 corp: 6/64b lim: 40 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 ChangeBit- 00:07:13.292 [2024-11-26 20:09:26.075632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.292 [2024-11-26 20:09:26.075658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.292 [2024-11-26 20:09:26.075719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.292 [2024-11-26 20:09:26.075733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.292 #19 NEW cov: 12462 ft: 13971 corp: 7/81b lim: 40 exec/s: 0 rss: 73Mb L: 17/18 MS: 1 EraseBytes- 00:07:13.293 [2024-11-26 20:09:26.115603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.293 [2024-11-26 20:09:26.115630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.293 #20 NEW cov: 12462 ft: 14095 corp: 8/93b lim: 40 exec/s: 0 rss: 73Mb L: 12/18 MS: 1 CrossOver- 00:07:13.293 [2024-11-26 20:09:26.175814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.293 [2024-11-26 20:09:26.175840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.293 #21 NEW cov: 12462 ft: 14139 corp: 9/102b lim: 40 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 ShuffleBytes- 00:07:13.293 [2024-11-26 20:09:26.215872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.293 [2024-11-26 20:09:26.215897] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.550 #22 NEW cov: 12462 ft: 14152 corp: 10/111b lim: 40 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 ChangeBinInt- 00:07:13.550 [2024-11-26 20:09:26.276093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.550 [2024-11-26 20:09:26.276118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.550 #23 NEW cov: 12462 ft: 14210 corp: 11/126b lim: 40 exec/s: 0 rss: 73Mb L: 15/18 MS: 1 CopyPart- 00:07:13.550 [2024-11-26 20:09:26.316167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.550 [2024-11-26 20:09:26.316192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.550 #24 NEW cov: 12462 ft: 14248 corp: 12/141b lim: 40 exec/s: 0 rss: 73Mb L: 15/18 MS: 1 CMP- DE: "\001\006"- 00:07:13.550 [2024-11-26 20:09:26.376378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.550 [2024-11-26 20:09:26.376406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.550 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:13.550 #25 NEW cov: 12485 ft: 14283 corp: 13/156b lim: 40 exec/s: 0 rss: 73Mb L: 15/18 MS: 1 PersAutoDict- DE: "\001\006"- 00:07:13.550 [2024-11-26 20:09:26.416564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fc000000 cdw11:20000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.550 [2024-11-26 20:09:26.416589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.550 [2024-11-26 20:09:26.416652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.550 [2024-11-26 20:09:26.416667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.550 #26 NEW cov: 12485 ft: 14294 corp: 14/174b lim: 40 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 ChangeBinInt- 00:07:13.550 [2024-11-26 20:09:26.476674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.550 [2024-11-26 20:09:26.476700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.806 #27 NEW cov: 12485 ft: 14310 corp: 15/189b lim: 40 exec/s: 27 rss: 74Mb L: 15/18 MS: 1 ChangeBinInt- 00:07:13.806 [2024-11-26 20:09:26.536936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.806 [2024-11-26 20:09:26.536962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.806 [2024-11-26 20:09:26.537021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.806 [2024-11-26 20:09:26.537035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.806 #28 NEW cov: 12485 ft: 14330 corp: 16/206b lim: 40 exec/s: 28 rss: 74Mb L: 17/18 MS: 1 ChangeBit- 00:07:13.806 [2024-11-26 20:09:26.576886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.806 [2024-11-26 20:09:26.576911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.806 #29 NEW cov: 12485 ft: 14336 corp: 17/215b lim: 40 exec/s: 29 rss: 74Mb L: 9/18 MS: 1 ShuffleBytes- 00:07:13.806 [2024-11-26 20:09:26.637105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.806 [2024-11-26 20:09:26.637130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.806 #30 NEW cov: 12485 ft: 14350 corp: 18/224b lim: 40 exec/s: 30 rss: 74Mb L: 9/18 MS: 1 ShuffleBytes- 00:07:13.806 [2024-11-26 20:09:26.677201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.806 [2024-11-26 20:09:26.677225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.806 #31 NEW cov: 12485 ft: 14365 corp: 19/239b lim: 40 exec/s: 31 rss: 74Mb L: 15/18 MS: 1 ShuffleBytes- 00:07:13.806 [2024-11-26 20:09:26.717379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:203a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.806 [2024-11-26 20:09:26.717404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.806 [2024-11-26 20:09:26.717467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.806 [2024-11-26 20:09:26.717481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.063 #32 NEW cov: 12485 ft: 14375 corp: 20/258b lim: 40 exec/s: 32 rss: 74Mb L: 19/19 MS: 1 InsertByte- 00:07:14.063 [2024-11-26 20:09:26.757534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.063 [2024-11-26 20:09:26.757559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.063 [2024-11-26 20:09:26.757623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.757637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.064 #33 NEW cov: 12485 ft: 14386 corp: 21/275b lim: 40 exec/s: 33 rss: 74Mb L: 17/19 MS: 1 ShuffleBytes- 00:07:14.064 [2024-11-26 20:09:26.797642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00006000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.797670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.064 [2024-11-26 20:09:26.797728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.797744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.064 #34 NEW cov: 12485 ft: 14460 corp: 22/292b lim: 40 exec/s: 34 rss: 74Mb L: 17/19 MS: 1 ChangeByte- 00:07:14.064 [2024-11-26 20:09:26.857778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.857802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.064 [2024-11-26 20:09:26.857862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.857877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.064 #35 NEW cov: 12485 ft: 14475 corp: 23/309b lim: 40 exec/s: 35 rss: 74Mb L: 17/19 MS: 1 ShuffleBytes- 00:07:14.064 [2024-11-26 20:09:26.897788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:002a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.897812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.064 #41 NEW cov: 12485 ft: 14496 corp: 24/318b lim: 40 exec/s: 41 rss: 74Mb L: 9/19 MS: 1 ChangeByte- 00:07:14.064 [2024-11-26 20:09:26.938150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.938175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.064 [2024-11-26 20:09:26.938235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.938249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.064 [2024-11-26 20:09:26.938308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.938321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.064 #43 NEW cov: 
12485 ft: 14711 corp: 25/344b lim: 40 exec/s: 43 rss: 74Mb L: 26/26 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:14.064 [2024-11-26 20:09:26.978374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.978399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.064 [2024-11-26 20:09:26.978460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.978474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.064 [2024-11-26 20:09:26.978530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.978544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.064 [2024-11-26 20:09:26.978606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.064 [2024-11-26 20:09:26.978619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.322 #44 NEW cov: 12485 ft: 15214 corp: 26/379b lim: 40 exec/s: 44 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:14.322 [2024-11-26 20:09:27.018262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fc000000 cdw11:00000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.018288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.322 [2024-11-26 20:09:27.018348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.018377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.322 #45 NEW cov: 12485 ft: 15222 corp: 27/397b lim: 40 exec/s: 45 rss: 74Mb L: 18/35 MS: 1 ShuffleBytes- 00:07:14.322 [2024-11-26 20:09:27.078699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000097 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.078724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.322 [2024-11-26 20:09:27.078785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:97979797 cdw11:97979797 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.078799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.322 [2024-11-26 20:09:27.078860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:97979797 cdw11:97979797 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.078873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.322 [2024-11-26 20:09:27.078928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:97979700 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.078945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.322 #46 NEW cov: 12485 ft: 15226 corp: 28/434b lim: 40 exec/s: 46 rss: 74Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:14.322 [2024-11-26 20:09:27.118522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.118546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.322 [2024-11-26 20:09:27.118608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.322 [2024-11-26 20:09:27.118622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.322 #47 NEW cov: 12485 ft: 15246 corp: 29/454b lim: 40 exec/s: 47 rss: 74Mb L: 20/37 MS: 1 InsertRepeatedBytes- 00:07:14.322 [2024-11-26 20:09:27.158643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.323 [2024-11-26 20:09:27.158669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.323 [2024-11-26 20:09:27.158726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06010600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.323 [2024-11-26 20:09:27.158740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.323 #48 NEW cov: 12485 ft: 15252 corp: 30/471b lim: 40 exec/s: 48 rss: 74Mb L: 17/37 MS: 1 PersAutoDict- DE: "\001\006"- 00:07:14.323 [2024-11-26 20:09:27.218802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:04009600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.323 [2024-11-26 20:09:27.218827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.323 [2024-11-26 20:09:27.218886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.323 [2024-11-26 20:09:27.218899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.580 #49 NEW cov: 12485 ft: 15260 corp: 31/489b lim: 40 exec/s: 49 rss: 74Mb L: 18/37 MS: 1 InsertByte- 00:07:14.580 [2024-11-26 20:09:27.278873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 
20:09:27.278898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.580 #50 NEW cov: 12485 ft: 15268 corp: 32/499b lim: 40 exec/s: 50 rss: 74Mb L: 10/37 MS: 1 InsertByte- 00:07:14.580 [2024-11-26 20:09:27.319369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:5e5e5e5e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.319393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.580 [2024-11-26 20:09:27.319453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.319467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.580 [2024-11-26 20:09:27.319523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.319536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.580 [2024-11-26 20:09:27.319604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.319618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.580 #51 NEW cov: 12485 ft: 15320 corp: 33/538b lim: 40 exec/s: 51 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:14.580 [2024-11-26 20:09:27.359105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.359130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.580 #53 NEW cov: 12485 ft: 15345 corp: 34/548b lim: 40 exec/s: 53 rss: 74Mb L: 10/39 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:14.580 [2024-11-26 20:09:27.419255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.419279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.580 #54 NEW cov: 12485 ft: 15353 corp: 35/557b lim: 40 exec/s: 54 rss: 75Mb L: 9/39 MS: 1 CrossOver- 00:07:14.580 [2024-11-26 20:09:27.479789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.479813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.580 [2024-11-26 20:09:27.479872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00979797 cdw11:97979797 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.479887] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.580 [2024-11-26 20:09:27.479948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:97979797 cdw11:97979797 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.479961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.580 [2024-11-26 20:09:27.480018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:97979797 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.580 [2024-11-26 20:09:27.480032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.838 #55 NEW cov: 12485 ft: 15370 corp: 36/591b lim: 40 exec/s: 27 rss: 75Mb L: 34/39 MS: 1 CrossOver- 00:07:14.838 #55 DONE cov: 12485 ft: 15370 corp: 36/591b lim: 40 exec/s: 27 rss: 75Mb 00:07:14.838 ###### Recommended dictionary. ###### 00:07:14.838 "\001\006" # Uses: 3 00:07:14.838 ###### End of recommended dictionary. ###### 00:07:14.838 Done 55 runs in 2 second(s) 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.838 20:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:14.838 [2024-11-26 20:09:27.666345] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:14.838 [2024-11-26 20:09:27.666417] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613221 ] 00:07:15.095 [2024-11-26 20:09:27.858728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.095 [2024-11-26 20:09:27.892169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.095 [2024-11-26 20:09:27.951272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.095 [2024-11-26 20:09:27.967634] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:15.095 INFO: Running with entropic power schedule (0xFF, 100). 00:07:15.095 INFO: Seed: 760428657 00:07:15.095 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:15.095 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:15.095 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:15.095 INFO: A corpus is not provided, starting from an empty corpus 00:07:15.095 #2 INITED exec/s: 0 rss: 65Mb 00:07:15.095 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:15.095 This may also happen if the target rejected all inputs we tried so far 00:07:15.095 [2024-11-26 20:09:28.016792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.095 [2024-11-26 20:09:28.016821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.095 [2024-11-26 20:09:28.016883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.095 [2024-11-26 20:09:28.016899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.095 [2024-11-26 20:09:28.016960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.095 [2024-11-26 20:09:28.016974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.095 [2024-11-26 20:09:28.017032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.095 [2024-11-26 20:09:28.017046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.607 NEW_FUNC[1/717]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:15.607 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.607 #9 NEW cov: 12252 ft: 12251 corp: 2/35b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:15.607 NEW_FUNC[1/2]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:15.607 NEW_FUNC[2/2]: 0x138e9e8 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1768 00:07:15.607 #11 NEW cov: 12398 ft: 13582 corp: 3/43b lim: 35 exec/s: 0 rss: 73Mb L: 8/34 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:15.607 #12 NEW cov: 12404 ft: 13873 corp: 4/52b lim: 35 exec/s: 0 rss: 73Mb L: 9/34 MS: 1 CrossOver- 00:07:15.607 [2024-11-26 20:09:28.437813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.607 [2024-11-26 20:09:28.437847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.607 [2024-11-26 20:09:28.437913] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.607 [2024-11-26 20:09:28.437928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.607 [2024-11-26 20:09:28.437992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.607 [2024-11-26 20:09:28.438005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.607 [2024-11-26 20:09:28.438066] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.607 [2024-11-26 20:09:28.438079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.607 #13 NEW cov: 12489 ft: 14062 corp: 5/86b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:15.607 [2024-11-26 20:09:28.498151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.607 [2024-11-26 20:09:28.498178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.607 [2024-11-26 20:09:28.498243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.607 [2024-11-26 20:09:28.498260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.607 [2024-11-26 20:09:28.498325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.607 [2024-11-26 20:09:28.498339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.607 [2024-11-26 20:09:28.498399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.608 [2024-11-26 20:09:28.498414] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.608 [2024-11-26 20:09:28.498475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.608 [2024-11-26 20:09:28.498489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.864 #14 NEW cov: 12496 ft: 14276 corp: 6/121b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertByte- 00:07:15.864 [2024-11-26 20:09:28.557868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.557897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.864 #16 NEW cov: 12496 ft: 14652 corp: 7/140b lim: 35 exec/s: 0 rss: 73Mb L: 19/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:15.864 [2024-11-26 20:09:28.598385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.598410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.598474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.598490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.598553] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.598567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.598632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.598645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.598706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.598720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.864 #17 NEW cov: 12496 ft: 14751 corp: 8/175b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:07:15.864 [2024-11-26 20:09:28.658432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.658458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.658520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.658534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.658601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.658615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.658679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.658692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.864 #18 NEW cov: 12496 ft: 14770 corp: 9/209b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeByte- 00:07:15.864 #19 NEW cov: 12496 ft: 14880 corp: 10/217b lim: 35 exec/s: 0 rss: 74Mb L: 8/35 MS: 1 ShuffleBytes- 00:07:15.864 [2024-11-26 20:09:28.758928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.758954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.759017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.864 [2024-11-26 20:09:28.759050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.864 [2024-11-26 20:09:28.759117] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.865 [2024-11-26 20:09:28.759131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.865 [2024-11-26 20:09:28.759192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.865 [2024-11-26 20:09:28.759206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.865 [2024-11-26 20:09:28.759271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.865 [2024-11-26 20:09:28.759285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.865 #20 NEW cov: 12496 ft: 14922 corp: 11/252b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:16.122 [2024-11-26 20:09:28.798901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.798927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:28.798990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.799005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:28.799066] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.799079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:28.799141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.799155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.122 #26 NEW cov: 12496 ft: 14953 corp: 12/283b lim: 35 exec/s: 0 rss: 74Mb L: 31/35 MS: 1 EraseBytes- 00:07:16.122 [2024-11-26 20:09:28.858705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.858730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.122 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:16.122 #27 NEW cov: 12519 ft: 14976 corp: 13/302b lim: 35 exec/s: 0 rss: 74Mb L: 19/35 MS: 1 ChangeBinInt- 00:07:16.122 [2024-11-26 20:09:28.919179] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.919205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:28.919268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.919282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:28.919345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.919359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:28.919423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:28.919439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.122 #28 NEW cov: 12519 ft: 15093 corp: 14/336b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeByte- 00:07:16.122 #31 NEW cov: 12519 ft: 15158 corp: 15/343b lim: 35 exec/s: 31 rss: 74Mb L: 7/35 MS: 3 EraseBytes-CMP-InsertByte- DE: "\007\000"- 00:07:16.122 [2024-11-26 20:09:29.039635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:29.039662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:29.039726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:29.039742] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:29.039821] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:29.039836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:29.039897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:29.039911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.122 [2024-11-26 20:09:29.039977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.122 [2024-11-26 20:09:29.039991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.382 #32 NEW cov: 12519 ft: 15216 corp: 16/378b lim: 35 exec/s: 32 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:16.382 [2024-11-26 20:09:29.079762] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.079788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.079851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.079867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.079928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.079942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.080007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.080021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.080082] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.080095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.382 #33 NEW cov: 12519 ft: 15246 corp: 17/413b lim: 35 exec/s: 33 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:16.382 [2024-11-26 20:09:29.139934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.139963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.140030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.140045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.140106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.140120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.140184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.140198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.140262] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.140276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.382 #34 NEW cov: 12519 ft: 15253 corp: 18/448b lim: 35 exec/s: 34 rss: 74Mb L: 35/35 MS: 1 ChangeASCIIInt- 00:07:16.382 [2024-11-26 20:09:29.180064] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.180090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.180157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.180172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.180234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.180246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.180308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.180322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.180385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.180399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.382 #35 NEW cov: 12519 ft: 15279 corp: 19/483b lim: 35 exec/s: 35 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:16.382 #36 NEW cov: 12519 ft: 15325 corp: 20/492b lim: 35 exec/s: 36 rss: 74Mb L: 9/35 MS: 1 ShuffleBytes- 00:07:16.382 [2024-11-26 20:09:29.280069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:16.382 [2024-11-26 20:09:29.280098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.382 [2024-11-26 20:09:29.280165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.382 [2024-11-26 20:09:29.280180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.640 #37 NEW cov: 12519 ft: 15523 corp: 21/517b lim: 35 exec/s: 37 rss: 74Mb L: 25/35 MS: 1 InsertRepeatedBytes- 00:07:16.641 [2024-11-26 20:09:29.340545] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.340573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMMAND SEQUENCE ERROR (00/0c) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.340631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.340647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.340722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.340737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.340802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.340816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.340880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.340894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.641 NEW_FUNC[1/1]: 0x46ed08 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:07:16.641 #38 NEW cov: 12553 ft: 15600 corp: 22/552b lim: 35 exec/s: 38 rss: 75Mb L: 35/35 MS: 1 PersAutoDict- DE: "\007\000"- 00:07:16.641 [2024-11-26 20:09:29.400524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.400551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.400615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.400631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.400696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:16.641 [2024-11-26 20:09:29.400711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.400777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.400792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.641 #39 NEW cov: 12553 ft: 15619 corp: 23/586b lim: 35 exec/s: 39 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:16.641 [2024-11-26 20:09:29.440503] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.440529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.440593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.440615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.440682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.440700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.641 #40 NEW cov: 12553 ft: 15731 corp: 24/610b lim: 35 exec/s: 40 rss: 75Mb L: 24/35 MS: 1 EraseBytes- 00:07:16.641 [2024-11-26 20:09:29.480961] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.480987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.481052] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.481066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.481133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.481147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.481211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.481225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.641 [2024-11-26 20:09:29.481289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.481303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.641 #41 NEW cov: 
12553 ft: 15743 corp: 25/645b lim: 35 exec/s: 41 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:16.641 [2024-11-26 20:09:29.520325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.641 [2024-11-26 20:09:29.520351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.641 #42 NEW cov: 12553 ft: 15785 corp: 26/652b lim: 35 exec/s: 42 rss: 75Mb L: 7/35 MS: 1 ChangeBinInt- 00:07:16.899 [2024-11-26 20:09:29.580532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.580558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.899 #43 NEW cov: 12553 ft: 15791 corp: 27/660b lim: 35 exec/s: 43 rss: 75Mb L: 8/35 MS: 1 InsertByte- 00:07:16.899 [2024-11-26 20:09:29.640709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.640734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.899 #44 NEW cov: 12553 ft: 15833 corp: 28/667b lim: 35 exec/s: 44 rss: 75Mb L: 7/35 MS: 1 ChangeBit- 00:07:16.899 [2024-11-26 20:09:29.681482] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.681508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.899 [2024-11-26 20:09:29.681570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.681585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.899 [2024-11-26 20:09:29.681650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.681664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.899 [2024-11-26 20:09:29.681731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.681744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.899 [2024-11-26 20:09:29.681808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.681821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.899 #45 NEW cov: 12553 ft: 15847 corp: 29/702b lim: 35 exec/s: 45 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:07:16.899 [2024-11-26 20:09:29.721580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.899 [2024-11-26 20:09:29.721611] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.900 [2024-11-26 20:09:29.721674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.721690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.900 [2024-11-26 20:09:29.721751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.721764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.900 [2024-11-26 20:09:29.721828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.721842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.900 [2024-11-26 20:09:29.721903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.721916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.900 #46 NEW cov: 12553 ft: 15906 corp: 30/737b lim: 35 exec/s: 46 rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:07:16.900 [2024-11-26 20:09:29.781320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.781346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.900 #47 NEW cov: 12553 ft: 15943 corp: 31/756b lim: 35 exec/s: 47 rss: 75Mb L: 19/35 MS: 1 CrossOver- 00:07:16.900 [2024-11-26 20:09:29.821518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:4 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.821545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.900 [2024-11-26 20:09:29.821608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:5 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.821625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.900 [2024-11-26 20:09:29.821682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:6 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.900 [2024-11-26 20:09:29.821698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.156 NEW_FUNC[1/1]: 0x46fc18 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:07:17.156 #51 NEW cov: 12577 ft: 16012 corp: 32/779b lim: 35 exec/s: 51 rss: 75Mb L: 23/35 MS: 4 ChangeByte-PersAutoDict-CrossOver-InsertRepeatedBytes- DE: "\007\000"- 00:07:17.156 [2024-11-26 20:09:29.862003] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.862029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.862091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.862108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.862170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.862184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.862245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000037 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.862258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.862320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.862334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.156 #52 NEW cov: 12577 ft: 16020 corp: 33/814b lim: 35 exec/s: 52 rss: 75Mb L: 35/35 MS: 1 ChangeASCIIInt- 00:07:17.156 [2024-11-26 20:09:29.922206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.922234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMMAND SEQUENCE ERROR (00/0c) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.922295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.922324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.922383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.922396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.922456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.922470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.922531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000037 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.922545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:07:17.156 #53 NEW cov: 12577 ft: 16038 corp: 34/849b lim: 35 exec/s: 53 rss: 75Mb L: 35/35 MS: 1 ChangeASCIIInt- 00:07:17.156 [2024-11-26 20:09:29.981993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:4 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.982021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.982079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:5 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.982098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.156 [2024-11-26 20:09:29.982156] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:6 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.156 [2024-11-26 20:09:29.982172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.156 #54 NEW cov: 12577 ft: 16107 corp: 35/872b lim: 35 exec/s: 27 rss: 75Mb L: 23/35 MS: 1 ChangeBinInt- 00:07:17.156 #54 DONE cov: 12577 ft: 16107 corp: 35/872b lim: 35 exec/s: 27 rss: 75Mb 00:07:17.156 ###### Recommended dictionary. ###### 00:07:17.156 "\007\000" # Uses: 2 00:07:17.156 ###### End of recommended dictionary. ###### 00:07:17.156 Done 54 runs in 2 second(s) 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.413 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.414 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:17.414 20:09:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:17.414 [2024-11-26 20:09:30.172419] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:17.414 [2024-11-26 20:09:30.172491] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613752 ] 00:07:17.671 [2024-11-26 20:09:30.362816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.671 [2024-11-26 20:09:30.397702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.671 [2024-11-26 20:09:30.456671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.671 [2024-11-26 20:09:30.473025] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:17.671 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.671 INFO: Seed: 3268440027 00:07:17.671 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:17.671 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:17.671 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:17.671 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.671 #2 INITED exec/s: 0 rss: 65Mb 00:07:17.671 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:17.671 This may also happen if the target rejected all inputs we tried so far 00:07:17.671 [2024-11-26 20:09:30.517843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.671 [2024-11-26 20:09:30.517877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.671 [2024-11-26 20:09:30.517927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.671 [2024-11-26 20:09:30.517943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.929 NEW_FUNC[1/716]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:17.929 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.929 #15 NEW cov: 12240 ft: 12239 corp: 2/16b lim: 35 exec/s: 0 rss: 73Mb L: 15/15 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:18.187 [2024-11-26 20:09:30.868755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:30.868793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.187 [2024-11-26 20:09:30.868828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:30.868845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.187 #16 NEW cov: 12353 ft: 12757 corp: 3/31b lim: 35 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 ShuffleBytes- 00:07:18.187 [2024-11-26 20:09:30.968879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:30.968911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.187 [2024-11-26 20:09:30.968960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:30.968976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.187 #17 NEW cov: 12359 ft: 13052 corp: 4/48b lim: 35 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 CopyPart- 00:07:18.187 [2024-11-26 20:09:31.029032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:31.029062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.187 [2024-11-26 20:09:31.029111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:31.029127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:18.187 #23 NEW cov: 12444 ft: 13425 corp: 5/63b lim: 35 exec/s: 0 rss: 73Mb L: 15/17 MS: 1 ChangeBinInt- 00:07:18.187 [2024-11-26 20:09:31.089174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:31.089205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.187 [2024-11-26 20:09:31.089263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.187 [2024-11-26 20:09:31.089279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.446 #24 NEW cov: 12444 ft: 13523 corp: 6/78b lim: 35 exec/s: 0 rss: 73Mb L: 15/17 MS: 1 ChangeBit- 00:07:18.446 [2024-11-26 20:09:31.139336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.139368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.446 [2024-11-26 20:09:31.139417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.139433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.446 [2024-11-26 20:09:31.139463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.139479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.446 #25 NEW cov: 12444 ft: 13813 corp: 7/104b lim: 35 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:18.446 [2024-11-26 20:09:31.229557] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.229590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.446 [2024-11-26 20:09:31.229648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.229664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.446 #26 NEW cov: 12444 ft: 13901 corp: 8/121b lim: 35 exec/s: 0 rss: 73Mb L: 17/26 MS: 1 ChangeBit- 00:07:18.446 [2024-11-26 20:09:31.319882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.319914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.446 [2024-11-26 20:09:31.319949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.319966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:18.446 [2024-11-26 20:09:31.319996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.320011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.446 [2024-11-26 20:09:31.320041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.446 [2024-11-26 20:09:31.320056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.446 #27 NEW cov: 12444 ft: 14388 corp: 9/153b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:18.704 [2024-11-26 20:09:31.379999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.380030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.380065] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.380085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.704 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:18.704 #29 NEW cov: 12467 ft: 14450 corp: 10/173b lim: 35 exec/s: 0 rss: 73Mb L: 20/32 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:18.704 [2024-11-26 20:09:31.430189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.430220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.430255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.430271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.430302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.430317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.430347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.430362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.704 #30 NEW cov: 12467 ft: 14496 corp: 11/205b lim: 35 exec/s: 30 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:18.704 [2024-11-26 20:09:31.520327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.520356] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.520405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.520421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.520452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.520467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.704 #31 NEW cov: 12467 ft: 14553 corp: 12/231b lim: 35 exec/s: 31 rss: 74Mb L: 26/32 MS: 1 ChangeBinInt- 00:07:18.704 [2024-11-26 20:09:31.610640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.610670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.610704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.610721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.610751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.610766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.704 [2024-11-26 20:09:31.610796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.704 [2024-11-26 20:09:31.610811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.963 #32 NEW cov: 12467 ft: 14590 corp: 13/263b lim: 35 exec/s: 32 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\020"- 00:07:18.963 [2024-11-26 20:09:31.700879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.700910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.963 [2024-11-26 20:09:31.700968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.700984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.963 NEW_FUNC[1/1]: 0x46e838 in feat_volatile_write_cache /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:312 00:07:18.963 #33 NEW cov: 12481 ft: 14663 corp: 14/286b lim: 35 exec/s: 33 rss: 74Mb L: 23/32 MS: 1 InsertRepeatedBytes- 00:07:18.963 [2024-11-26 20:09:31.791144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.791174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.963 [2024-11-26 20:09:31.791223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.791239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.963 [2024-11-26 20:09:31.791269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.791284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.963 [2024-11-26 20:09:31.791314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.791329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.963 #34 NEW cov: 12481 ft: 14745 corp: 15/318b lim: 35 exec/s: 34 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:18.963 [2024-11-26 20:09:31.851312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.851343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.963 [2024-11-26 20:09:31.851378] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.851394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.963 [2024-11-26 20:09:31.851425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.851440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.963 [2024-11-26 20:09:31.851469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.963 [2024-11-26 20:09:31.851484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.221 #35 NEW cov: 12481 ft: 14784 corp: 16/351b lim: 35 exec/s: 35 rss: 74Mb L: 33/33 MS: 1 CrossOver- 00:07:19.221 [2024-11-26 20:09:31.941441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:31.941471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.221 [2024-11-26 20:09:31.941513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:31.941529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:19.221 #36 NEW cov: 12481 ft: 14824 corp: 17/368b lim: 35 exec/s: 36 rss: 74Mb L: 17/33 MS: 1 ChangeBinInt- 00:07:19.221 [2024-11-26 20:09:32.031621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:32.031651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.221 [2024-11-26 20:09:32.031700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:32.031716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.221 #37 NEW cov: 12481 ft: 14880 corp: 18/387b lim: 35 exec/s: 37 rss: 74Mb L: 19/33 MS: 1 InsertRepeatedBytes- 00:07:19.221 [2024-11-26 20:09:32.081869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:32.081900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.221 [2024-11-26 20:09:32.081935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:32.081951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.221 [2024-11-26 20:09:32.081982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:32.081997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.221 #38 NEW cov: 12481 ft: 14890 corp: 19/413b lim: 35 exec/s: 38 rss: 74Mb L: 26/33 MS: 1 ShuffleBytes- 00:07:19.221 [2024-11-26 20:09:32.131961] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.221 [2024-11-26 20:09:32.131993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.222 [2024-11-26 20:09:32.132028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.222 [2024-11-26 20:09:32.132059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.222 [2024-11-26 20:09:32.132090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.222 [2024-11-26 20:09:32.132106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.479 #39 NEW cov: 12481 ft: 14904 corp: 20/438b lim: 35 exec/s: 39 rss: 74Mb L: 25/33 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:07:19.479 [2024-11-26 20:09:32.192149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.192180] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.479 [2024-11-26 20:09:32.192215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.192231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.479 [2024-11-26 20:09:32.192266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.192281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.479 #40 NEW cov: 12481 ft: 14969 corp: 21/465b lim: 35 exec/s: 40 rss: 74Mb L: 27/33 MS: 1 InsertByte- 00:07:19.479 [2024-11-26 20:09:32.282419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.282450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.479 [2024-11-26 20:09:32.282485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.282501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.479 [2024-11-26 20:09:32.282531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.282547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.479 [2024-11-26 20:09:32.282592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.282616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.479 #41 NEW cov: 12481 ft: 14991 corp: 22/495b lim: 35 exec/s: 41 rss: 74Mb L: 30/33 MS: 1 EraseBytes- 00:07:19.479 [2024-11-26 20:09:32.382577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.382616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.479 [2024-11-26 20:09:32.382651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.479 [2024-11-26 20:09:32.382667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.737 #42 NEW cov: 12481 ft: 15002 corp: 23/512b lim: 35 exec/s: 42 rss: 74Mb L: 17/33 MS: 1 ChangeByte- 00:07:19.737 [2024-11-26 20:09:32.472857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.737 [2024-11-26 20:09:32.472888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.737 
[2024-11-26 20:09:32.472924] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.737 [2024-11-26 20:09:32.472940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.737 [2024-11-26 20:09:32.472970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.737 [2024-11-26 20:09:32.472985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.737 #43 NEW cov: 12481 ft: 15013 corp: 24/534b lim: 35 exec/s: 21 rss: 74Mb L: 22/33 MS: 1 CrossOver- 00:07:19.737 #43 DONE cov: 12481 ft: 15013 corp: 24/534b lim: 35 exec/s: 21 rss: 74Mb 00:07:19.737 ###### Recommended dictionary. ###### 00:07:19.737 "\000\000\000\000\000\000\000\020" # Uses: 1 00:07:19.737 ###### End of recommended dictionary. ###### 00:07:19.737 Done 43 runs in 2 second(s) 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.737 20:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c 
/tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:19.738 [2024-11-26 20:09:32.658933] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:19.738 [2024-11-26 20:09:32.659018] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614196 ] 00:07:19.995 [2024-11-26 20:09:32.847462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.995 [2024-11-26 20:09:32.880920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.254 [2024-11-26 20:09:32.940056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.254 [2024-11-26 20:09:32.956403] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:20.254 INFO: Running with entropic power schedule (0xFF, 100). 00:07:20.254 INFO: Seed: 1456455286 00:07:20.254 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:20.254 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:20.254 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:20.254 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.254 #2 INITED exec/s: 0 rss: 65Mb 00:07:20.254 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:20.254 This may also happen if the target rejected all inputs we tried so far 00:07:20.254 [2024-11-26 20:09:33.011645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.254 [2024-11-26 20:09:33.011675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.521 NEW_FUNC[1/717]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:20.521 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.521 #18 NEW cov: 12344 ft: 12336 corp: 2/42b lim: 105 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:20.521 [2024-11-26 20:09:33.332426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.521 [2024-11-26 20:09:33.332461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.521 #19 NEW cov: 12457 ft: 12996 corp: 3/83b lim: 105 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 CopyPart- 00:07:20.521 [2024-11-26 20:09:33.392528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.521 [2024-11-26 20:09:33.392557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.521 #20 NEW cov: 12463 ft: 13254 corp: 4/124b lim: 105 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 ShuffleBytes- 00:07:20.782 [2024-11-26 20:09:33.452747] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744065304166399 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.782 [2024-11-26 20:09:33.452777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.782 #21 NEW cov: 12548 ft: 13567 corp: 5/165b lim: 105 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 ChangeBit- 00:07:20.782 [2024-11-26 20:09:33.492766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.782 [2024-11-26 20:09:33.492794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.782 #23 NEW cov: 12548 ft: 13710 corp: 6/201b lim: 105 exec/s: 0 rss: 73Mb L: 36/41 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:20.782 [2024-11-26 20:09:33.532910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.782 [2024-11-26 20:09:33.532938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.782 #24 NEW cov: 12548 ft: 13907 corp: 7/238b lim: 105 exec/s: 0 rss: 73Mb L: 37/41 MS: 1 CrossOver- 00:07:20.782 [2024-11-26 20:09:33.573025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.782 [2024-11-26 20:09:33.573052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.782 #30 NEW cov: 12548 ft: 13952 corp: 8/279b lim: 105 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 CopyPart- 00:07:20.782 [2024-11-26 20:09:33.613153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.782 [2024-11-26 20:09:33.613180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.782 #31 NEW cov: 12548 ft: 14084 corp: 9/315b lim: 105 exec/s: 0 rss: 73Mb L: 36/41 MS: 1 ChangeByte- 00:07:20.782 [2024-11-26 20:09:33.673424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.782 [2024-11-26 20:09:33.673450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.782 [2024-11-26 20:09:33.673491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.782 [2024-11-26 20:09:33.673507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.782 #32 NEW cov: 12548 ft: 14529 corp: 10/367b lim: 105 exec/s: 0 rss: 73Mb L: 52/52 MS: 1 CrossOver- 00:07:21.040 [2024-11-26 20:09:33.713467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18374686471266238463 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.040 [2024-11-26 20:09:33.713499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:07:21.040 #33 NEW cov: 12548 ft: 14645 corp: 11/408b lim: 105 exec/s: 0 rss: 73Mb L: 41/52 MS: 1 ChangeBit- 00:07:21.040 [2024-11-26 20:09:33.773604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.040 [2024-11-26 20:09:33.773632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.040 #34 NEW cov: 12548 ft: 14680 corp: 12/449b lim: 105 exec/s: 0 rss: 73Mb L: 41/52 MS: 1 ChangeBit- 00:07:21.040 [2024-11-26 20:09:33.813955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:2816 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.040 [2024-11-26 20:09:33.813983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.040 [2024-11-26 20:09:33.814030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.041 [2024-11-26 20:09:33.814048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.041 [2024-11-26 20:09:33.814104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.041 [2024-11-26 20:09:33.814120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.041 #35 NEW cov: 12548 ft: 15045 corp: 13/527b lim: 105 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 CrossOver- 00:07:21.041 [2024-11-26 20:09:33.873894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446462598917390335 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.041 [2024-11-26 20:09:33.873923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.041 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:21.041 #36 NEW cov: 12571 ft: 15090 corp: 14/568b lim: 105 exec/s: 0 rss: 74Mb L: 41/78 MS: 1 CMP- DE: "\000\000\000\037"- 00:07:21.041 [2024-11-26 20:09:33.913978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16068843462052544511 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.041 [2024-11-26 20:09:33.914007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.041 #37 NEW cov: 12571 ft: 15109 corp: 15/609b lim: 105 exec/s: 0 rss: 74Mb L: 41/78 MS: 1 ChangeBit- 00:07:21.300 [2024-11-26 20:09:33.974344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:33.974372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.300 [2024-11-26 20:09:33.974410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:33.974426] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.300 #38 NEW cov: 12571 ft: 15132 corp: 16/663b lim: 105 exec/s: 38 rss: 74Mb L: 54/78 MS: 1 CrossOver- 00:07:21.300 [2024-11-26 20:09:34.034621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18374686471266238463 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.034673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.300 [2024-11-26 20:09:34.034733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.034757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.300 [2024-11-26 20:09:34.034816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.034832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.300 #39 NEW cov: 12571 ft: 15169 corp: 17/731b lim: 105 exec/s: 39 rss: 74Mb L: 68/78 MS: 1 CrossOver- 00:07:21.300 [2024-11-26 20:09:34.074731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:2816 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.074768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.300 [2024-11-26 20:09:34.074832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.074847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.300 [2024-11-26 20:09:34.074907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.074922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.300 #40 NEW cov: 12571 ft: 15181 corp: 18/809b lim: 105 exec/s: 40 rss: 74Mb L: 78/78 MS: 1 ShuffleBytes- 00:07:21.300 [2024-11-26 20:09:34.134795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.134823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.300 [2024-11-26 20:09:34.134887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.134903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.300 #41 NEW cov: 12571 ft: 15188 corp: 19/858b lim: 105 exec/s: 41 rss: 74Mb L: 49/78 MS: 1 
InsertRepeatedBytes- 00:07:21.300 [2024-11-26 20:09:34.194949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.194976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.300 [2024-11-26 20:09:34.195030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.300 [2024-11-26 20:09:34.195046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.571 #42 NEW cov: 12571 ft: 15202 corp: 20/907b lim: 105 exec/s: 42 rss: 74Mb L: 49/78 MS: 1 ShuffleBytes- 00:07:21.571 [2024-11-26 20:09:34.254982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.572 [2024-11-26 20:09:34.255011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.572 #43 NEW cov: 12571 ft: 15237 corp: 21/944b lim: 105 exec/s: 43 rss: 74Mb L: 37/78 MS: 1 ShuffleBytes- 00:07:21.572 [2024-11-26 20:09:34.315186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.572 [2024-11-26 20:09:34.315218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.572 #44 NEW cov: 12571 ft: 15264 corp: 22/985b lim: 105 exec/s: 44 rss: 74Mb L: 41/78 MS: 1 PersAutoDict- DE: "\000\000\000\037"- 00:07:21.572 [2024-11-26 20:09:34.375306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3026418949592973312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.572 [2024-11-26 20:09:34.375333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.572 #45 NEW cov: 12571 ft: 15306 corp: 23/1022b lim: 105 exec/s: 45 rss: 74Mb L: 37/78 MS: 1 InsertByte- 00:07:21.572 [2024-11-26 20:09:34.415539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.572 [2024-11-26 20:09:34.415566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.572 [2024-11-26 20:09:34.415625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.572 [2024-11-26 20:09:34.415642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.572 #46 NEW cov: 12571 ft: 15328 corp: 24/1071b lim: 105 exec/s: 46 rss: 74Mb L: 49/78 MS: 1 ChangeBit- 00:07:21.572 [2024-11-26 20:09:34.475606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.572 [2024-11-26 20:09:34.475650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.832 #47 NEW cov: 12571 ft: 15377 corp: 25/1103b lim: 105 exec/s: 47 rss: 74Mb L: 32/78 MS: 1 EraseBytes- 00:07:21.832 [2024-11-26 20:09:34.535798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:704643071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.832 [2024-11-26 20:09:34.535827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.832 #50 NEW cov: 12571 ft: 15384 corp: 26/1125b lim: 105 exec/s: 50 rss: 74Mb L: 22/78 MS: 3 ChangeByte-CrossOver-PersAutoDict- DE: "\000\000\000\037"- 00:07:21.832 [2024-11-26 20:09:34.575882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.832 [2024-11-26 20:09:34.575910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.832 #51 NEW cov: 12571 ft: 15406 corp: 27/1162b lim: 105 exec/s: 51 rss: 74Mb L: 37/78 MS: 1 CopyPart- 00:07:21.832 [2024-11-26 20:09:34.616029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.832 [2024-11-26 20:09:34.616057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.832 #52 NEW cov: 12571 ft: 15427 corp: 28/1194b lim: 105 exec/s: 52 rss: 74Mb L: 32/78 MS: 1 CopyPart- 00:07:21.832 [2024-11-26 20:09:34.676318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.832 [2024-11-26 20:09:34.676345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.832 [2024-11-26 20:09:34.676387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.832 [2024-11-26 20:09:34.676403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.832 #53 NEW cov: 12571 ft: 15492 corp: 29/1240b lim: 105 exec/s: 53 rss: 75Mb L: 46/78 MS: 1 EraseBytes- 00:07:21.832 [2024-11-26 20:09:34.736362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:704643071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.832 [2024-11-26 20:09:34.736391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.091 #54 NEW cov: 12571 ft: 15506 corp: 30/1262b lim: 105 exec/s: 54 rss: 75Mb L: 22/78 MS: 1 ShuffleBytes- 00:07:22.091 [2024-11-26 20:09:34.796501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.091 [2024-11-26 20:09:34.796530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.091 #55 NEW cov: 12571 ft: 15519 corp: 31/1303b lim: 105 exec/s: 55 rss: 75Mb L: 41/78 MS: 1 CMP- DE: "s\013\270\216\341\332\221\000"- 00:07:22.091 [2024-11-26 20:09:34.836744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.091 [2024-11-26 20:09:34.836773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.091 [2024-11-26 20:09:34.836811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:524288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.091 [2024-11-26 20:09:34.836828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.091 #56 NEW cov: 12571 ft: 15524 corp: 32/1357b lim: 105 exec/s: 56 rss: 75Mb L: 54/78 MS: 1 ChangeBinInt- 00:07:22.092 [2024-11-26 20:09:34.876740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.092 [2024-11-26 20:09:34.876768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.092 #57 NEW cov: 12571 ft: 15529 corp: 33/1393b lim: 105 exec/s: 57 rss: 75Mb L: 36/78 MS: 1 CopyPart- 00:07:22.092 [2024-11-26 20:09:34.917005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.092 [2024-11-26 20:09:34.917032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.092 [2024-11-26 20:09:34.917072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.092 [2024-11-26 20:09:34.917090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.092 #58 NEW cov: 12571 ft: 15540 corp: 34/1451b lim: 105 exec/s: 58 rss: 75Mb L: 58/78 MS: 1 CopyPart- 00:07:22.092 [2024-11-26 20:09:34.977009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.092 [2024-11-26 20:09:34.977036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.092 #59 NEW cov: 12571 ft: 15554 corp: 35/1477b lim: 105 exec/s: 29 rss: 75Mb L: 26/78 MS: 1 EraseBytes- 00:07:22.092 #59 DONE cov: 12571 ft: 15554 corp: 35/1477b lim: 105 exec/s: 29 rss: 75Mb 00:07:22.092 ###### Recommended dictionary. ###### 00:07:22.092 "\000\000\000\037" # Uses: 2 00:07:22.092 "s\013\270\216\341\332\221\000" # Uses: 0 00:07:22.092 ###### End of recommended dictionary. 
###### 00:07:22.092 Done 59 runs in 2 second(s) 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.351 20:09:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:22.351 [2024-11-26 20:09:35.141816] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:22.351 [2024-11-26 20:09:35.141888] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614569 ] 00:07:22.610 [2024-11-26 20:09:35.331124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.610 [2024-11-26 20:09:35.365331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.610 [2024-11-26 20:09:35.424393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.610 [2024-11-26 20:09:35.440763] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:22.610 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.610 INFO: Seed: 3940456936 00:07:22.610 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:22.610 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:22.610 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:22.610 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.610 #2 INITED exec/s: 0 rss: 66Mb 00:07:22.610 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.610 This may also happen if the target rejected all inputs we tried so far 00:07:22.610 [2024-11-26 20:09:35.495969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:174391552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.610 [2024-11-26 20:09:35.496000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.177 NEW_FUNC[1/718]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:23.177 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:23.177 #30 NEW cov: 12365 ft: 12364 corp: 2/44b lim: 120 exec/s: 0 rss: 73Mb L: 43/43 MS: 3 CMP-ChangeBit-InsertRepeatedBytes- DE: "e\000"- 00:07:23.177 [2024-11-26 20:09:35.826881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:35.826924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.177 #34 NEW cov: 12478 ft: 12945 corp: 3/72b lim: 120 exec/s: 0 rss: 73Mb L: 28/43 MS: 4 InsertByte-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:23.177 [2024-11-26 20:09:35.867028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:174391552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:35.867056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.177 [2024-11-26 20:09:35.867093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:35.867110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.177 #35 NEW cov: 
12484 ft: 14130 corp: 4/130b lim: 120 exec/s: 0 rss: 73Mb L: 58/58 MS: 1 CopyPart- 00:07:23.177 [2024-11-26 20:09:35.927042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:35.927071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.177 #36 NEW cov: 12569 ft: 14360 corp: 5/160b lim: 120 exec/s: 0 rss: 73Mb L: 30/58 MS: 1 PersAutoDict- DE: "e\000"- 00:07:23.177 [2024-11-26 20:09:35.987348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1095391052032 len:57921 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:35.987375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.177 [2024-11-26 20:09:35.987436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:35.987453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.177 #37 NEW cov: 12569 ft: 14494 corp: 6/211b lim: 120 exec/s: 0 rss: 73Mb L: 51/58 MS: 1 CMP- DE: "\377\220\332\342@\007\344\322"- 00:07:23.177 [2024-11-26 20:09:36.027492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:36.027519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.177 [2024-11-26 20:09:36.027557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.177 [2024-11-26 20:09:36.027573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.178 #41 NEW cov: 12569 ft: 14656 corp: 7/262b lim: 120 exec/s: 0 rss: 73Mb L: 51/58 MS: 4 CrossOver-PersAutoDict-CrossOver-InsertRepeatedBytes- DE: "e\000"- 00:07:23.178 [2024-11-26 20:09:36.067462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.178 [2024-11-26 20:09:36.067492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.436 #42 NEW cov: 12569 ft: 14778 corp: 8/291b lim: 120 exec/s: 0 rss: 73Mb L: 29/58 MS: 1 EraseBytes- 00:07:23.436 [2024-11-26 20:09:36.127869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.127896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.436 [2024-11-26 20:09:36.127935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:280995634937856 len:16392 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.127950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.436 [2024-11-26 20:09:36.128006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.128021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.436 #43 NEW cov: 12569 ft: 15183 corp: 9/370b lim: 120 exec/s: 0 rss: 73Mb L: 79/79 MS: 1 CrossOver- 00:07:23.436 [2024-11-26 20:09:36.167745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:174391552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.167773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.436 #44 NEW cov: 12569 ft: 15216 corp: 10/413b lim: 120 exec/s: 0 rss: 73Mb L: 43/79 MS: 1 ShuffleBytes- 00:07:23.436 [2024-11-26 20:09:36.207987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9223372037029167360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.208015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.436 [2024-11-26 20:09:36.208053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.208070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.436 #45 NEW cov: 12569 ft: 15257 corp: 11/471b lim: 120 exec/s: 0 rss: 74Mb L: 58/79 MS: 1 ChangeBit- 00:07:23.436 [2024-11-26 20:09:36.267985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.268016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.436 #46 NEW cov: 12569 ft: 15301 corp: 12/501b lim: 120 exec/s: 0 rss: 74Mb L: 30/79 MS: 1 ShuffleBytes- 00:07:23.436 [2024-11-26 20:09:36.308138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6629298647395729407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.436 [2024-11-26 20:09:36.308166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.437 #47 NEW cov: 12569 ft: 15332 corp: 13/532b lim: 120 exec/s: 0 rss: 74Mb L: 31/79 MS: 1 InsertByte- 00:07:23.695 [2024-11-26 20:09:36.368632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.695 [2024-11-26 20:09:36.368660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.695 [2024-11-26 20:09:36.368712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:280995634937856 len:16392 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.695 [2024-11-26 20:09:36.368729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.695 [2024-11-26 20:09:36.368783] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.695 [2024-11-26 20:09:36.368798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.695 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:23.695 #48 NEW cov: 12592 ft: 15391 corp: 14/611b lim: 120 exec/s: 0 rss: 74Mb L: 79/79 MS: 1 ShuffleBytes- 00:07:23.695 [2024-11-26 20:09:36.428768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:174391552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.695 [2024-11-26 20:09:36.428795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.695 [2024-11-26 20:09:36.428835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.695 [2024-11-26 20:09:36.428850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.695 [2024-11-26 20:09:36.428907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.695 [2024-11-26 20:09:36.428937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.695 #49 NEW cov: 12592 ft: 15415 corp: 15/690b lim: 120 exec/s: 0 rss: 74Mb L: 79/79 MS: 1 CopyPart- 00:07:23.695 [2024-11-26 20:09:36.468901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:174391552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.695 [2024-11-26 20:09:36.468928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.696 [2024-11-26 20:09:36.468964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.696 [2024-11-26 20:09:36.468979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.696 [2024-11-26 20:09:36.469036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.696 [2024-11-26 20:09:36.469050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.696 #50 NEW cov: 12592 ft: 15461 corp: 16/769b lim: 120 exec/s: 50 rss: 74Mb L: 79/79 MS: 1 ChangeBit- 00:07:23.696 [2024-11-26 20:09:36.528732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2161727821339164671 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.696 [2024-11-26 20:09:36.528760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.696 #51 NEW cov: 12592 ft: 15541 corp: 17/799b lim: 120 exec/s: 51 rss: 74Mb L: 30/79 MS: 1 ChangeBinInt- 00:07:23.696 [2024-11-26 20:09:36.568856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:23.696 [2024-11-26 20:09:36.568883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.696 #52 NEW cov: 12592 ft: 15563 corp: 18/829b lim: 120 exec/s: 52 rss: 74Mb L: 30/79 MS: 1 ChangeByte- 00:07:23.696 [2024-11-26 20:09:36.608950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:174391552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.696 [2024-11-26 20:09:36.608977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.955 #53 NEW cov: 12592 ft: 15585 corp: 19/872b lim: 120 exec/s: 53 rss: 74Mb L: 43/79 MS: 1 ShuffleBytes- 00:07:23.955 [2024-11-26 20:09:36.649043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71935385212879104 len:2021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.649070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.955 #54 NEW cov: 12592 ft: 15590 corp: 20/915b lim: 120 exec/s: 54 rss: 74Mb L: 43/79 MS: 1 PersAutoDict- DE: "\377\220\332\342@\007\344\322"- 00:07:23.955 [2024-11-26 20:09:36.689193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.689221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.955 #55 NEW cov: 12592 ft: 15608 corp: 21/947b lim: 120 exec/s: 55 rss: 74Mb L: 32/79 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:23.955 [2024-11-26 20:09:36.729795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4702111233554596161 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.729822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.955 [2024-11-26 20:09:36.729876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4702111234474983745 len:16706 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.729892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.955 [2024-11-26 20:09:36.729947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:36028798113759489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.729961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.955 [2024-11-26 20:09:36.730018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.730033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.955 #56 NEW cov: 12592 ft: 16007 corp: 22/1054b lim: 120 exec/s: 56 rss: 74Mb L: 107/107 MS: 1 InsertRepeatedBytes- 00:07:23.955 [2024-11-26 20:09:36.789795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.789822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.955 [2024-11-26 20:09:36.789887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.789904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.955 [2024-11-26 20:09:36.789962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.789979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.955 #57 NEW cov: 12592 ft: 16021 corp: 23/1144b lim: 120 exec/s: 57 rss: 74Mb L: 90/107 MS: 1 InsertRepeatedBytes- 00:07:23.955 [2024-11-26 20:09:36.849621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910909 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.955 [2024-11-26 20:09:36.849648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.214 #58 NEW cov: 12592 ft: 16033 corp: 24/1176b lim: 120 exec/s: 58 rss: 74Mb L: 32/107 MS: 1 ChangeBit- 00:07:24.214 [2024-11-26 20:09:36.909786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71935385212879104 len:2021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:36.909814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.214 #59 NEW cov: 12592 ft: 16047 corp: 25/1219b lim: 120 exec/s: 59 rss: 74Mb L: 43/107 MS: 1 ChangeByte- 00:07:24.214 [2024-11-26 20:09:36.969937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:36.969969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.214 #60 NEW cov: 12592 ft: 16068 corp: 26/1255b lim: 120 exec/s: 60 rss: 74Mb L: 36/107 MS: 1 CopyPart- 00:07:24.214 [2024-11-26 20:09:37.010200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:37.010227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.214 [2024-11-26 20:09:37.010265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:37.010281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.214 #61 NEW cov: 12592 ft: 16086 corp: 27/1306b lim: 120 exec/s: 61 rss: 75Mb L: 51/107 MS: 1 ChangeByte- 00:07:24.214 [2024-11-26 20:09:37.070471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:0 lba:9223372037029167360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:37.070498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.214 [2024-11-26 20:09:37.070544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:37.070560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.214 [2024-11-26 20:09:37.070615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:37.070631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.214 #67 NEW cov: 12592 ft: 16106 corp: 28/1396b lim: 120 exec/s: 67 rss: 75Mb L: 90/107 MS: 1 InsertRepeatedBytes- 00:07:24.214 [2024-11-26 20:09:37.110314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71935385212879104 len:2021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.214 [2024-11-26 20:09:37.110342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.472 #68 NEW cov: 12592 ft: 16132 corp: 29/1439b lim: 120 exec/s: 68 rss: 75Mb L: 43/107 MS: 1 CMP- DE: "\001\221\332\342\347c\015Z"- 00:07:24.472 [2024-11-26 20:09:37.170803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9223372037029167360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.472 [2024-11-26 20:09:37.170830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.472 [2024-11-26 20:09:37.170879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.472 [2024-11-26 20:09:37.170894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.472 [2024-11-26 20:09:37.170948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.472 [2024-11-26 20:09:37.170964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.472 #69 NEW cov: 12592 ft: 16141 corp: 30/1529b lim: 120 exec/s: 69 rss: 75Mb L: 90/107 MS: 1 ChangeByte- 00:07:24.472 [2024-11-26 20:09:37.230661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.472 [2024-11-26 20:09:37.230691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.472 #70 NEW cov: 12592 ft: 16171 corp: 31/1570b lim: 120 exec/s: 70 rss: 75Mb L: 41/107 MS: 1 CrossOver- 00:07:24.472 [2024-11-26 20:09:37.291099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9223372037029167360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.472 [2024-11-26 20:09:37.291126] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.472 [2024-11-26 20:09:37.291173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.291188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.473 [2024-11-26 20:09:37.291244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.291260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.473 #71 NEW cov: 12592 ft: 16206 corp: 32/1661b lim: 120 exec/s: 71 rss: 75Mb L: 91/107 MS: 1 CrossOver- 00:07:24.473 [2024-11-26 20:09:37.331452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.331481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.473 [2024-11-26 20:09:37.331527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.331543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.473 [2024-11-26 20:09:37.331595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.331615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.473 [2024-11-26 20:09:37.331671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.331685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.473 #72 NEW cov: 12592 ft: 16208 corp: 33/1761b lim: 120 exec/s: 72 rss: 75Mb L: 100/107 MS: 1 InsertRepeatedBytes- 00:07:24.473 [2024-11-26 20:09:37.391277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9223372037029167360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.391305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.473 [2024-11-26 20:09:37.391341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.473 [2024-11-26 20:09:37.391357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.732 #73 NEW cov: 12592 ft: 16255 corp: 34/1830b lim: 120 exec/s: 73 rss: 75Mb L: 69/107 MS: 1 EraseBytes- 00:07:24.732 [2024-11-26 20:09:37.451416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:174391552 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:24.732 [2024-11-26 20:09:37.451445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.732 [2024-11-26 20:09:37.451482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.732 [2024-11-26 20:09:37.451499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.732 #74 NEW cov: 12592 ft: 16262 corp: 35/1888b lim: 120 exec/s: 74 rss: 75Mb L: 58/107 MS: 1 CrossOver- 00:07:24.732 [2024-11-26 20:09:37.491591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9223372036854775807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.732 [2024-11-26 20:09:37.491623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.732 [2024-11-26 20:09:37.491680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.732 [2024-11-26 20:09:37.491695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.732 #75 NEW cov: 12592 ft: 16309 corp: 36/1939b lim: 120 exec/s: 37 rss: 75Mb L: 51/107 MS: 1 ChangeBit- 00:07:24.732 #75 DONE cov: 12592 ft: 16309 corp: 36/1939b lim: 120 exec/s: 37 rss: 75Mb 00:07:24.732 ###### Recommended dictionary. ###### 00:07:24.732 "e\000" # Uses: 2 00:07:24.732 "\377\220\332\342@\007\344\322" # Uses: 1 00:07:24.732 "\000\000\000\000" # Uses: 0 00:07:24.732 "\001\221\332\342\347c\015Z" # Uses: 0 00:07:24.732 ###### End of recommended dictionary. 
###### 00:07:24.732 Done 75 runs in 2 second(s) 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.732 20:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:24.732 [2024-11-26 20:09:37.657781] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:24.732 [2024-11-26 20:09:37.657850] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615107 ] 00:07:24.990 [2024-11-26 20:09:37.843763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.990 [2024-11-26 20:09:37.877737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.249 [2024-11-26 20:09:37.936461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.249 [2024-11-26 20:09:37.952824] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:25.249 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.249 INFO: Seed: 2157510223 00:07:25.249 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:25.249 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:25.249 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:25.249 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.249 #2 INITED exec/s: 0 rss: 65Mb 00:07:25.249 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.249 This may also happen if the target rejected all inputs we tried so far 00:07:25.249 [2024-11-26 20:09:37.997543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:25.249 [2024-11-26 20:09:37.997577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.508 NEW_FUNC[1/716]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:25.508 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.508 #9 NEW cov: 12306 ft: 12291 corp: 2/36b lim: 100 exec/s: 0 rss: 73Mb L: 35/35 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:25.508 [2024-11-26 20:09:38.348558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:25.508 [2024-11-26 20:09:38.348596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.508 [2024-11-26 20:09:38.348639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:25.508 [2024-11-26 20:09:38.348657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.508 [2024-11-26 20:09:38.348689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:25.508 [2024-11-26 20:09:38.348706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.508 [2024-11-26 20:09:38.348735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:25.508 [2024-11-26 20:09:38.348751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.508 #10 NEW cov: 12421 ft: 13262 corp: 3/129b lim: 
100 exec/s: 0 rss: 73Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:07:25.767 [2024-11-26 20:09:38.448521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:25.767 [2024-11-26 20:09:38.448551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.767 #16 NEW cov: 12427 ft: 13598 corp: 4/164b lim: 100 exec/s: 0 rss: 73Mb L: 35/93 MS: 1 ChangeBit- 00:07:25.767 [2024-11-26 20:09:38.508703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:25.767 [2024-11-26 20:09:38.508735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.767 #22 NEW cov: 12512 ft: 13849 corp: 5/186b lim: 100 exec/s: 0 rss: 73Mb L: 22/93 MS: 1 EraseBytes- 00:07:25.767 [2024-11-26 20:09:38.568841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:25.767 [2024-11-26 20:09:38.568870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.767 [2024-11-26 20:09:38.568918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:25.767 [2024-11-26 20:09:38.568938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.767 #25 NEW cov: 12512 ft: 14259 corp: 6/235b lim: 100 exec/s: 0 rss: 73Mb L: 49/93 MS: 3 EraseBytes-EraseBytes-InsertRepeatedBytes- 00:07:25.767 [2024-11-26 20:09:38.628988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:25.767 [2024-11-26 20:09:38.629016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.025 #26 NEW cov: 12512 ft: 14340 corp: 7/270b lim: 100 exec/s: 0 rss: 73Mb L: 35/93 MS: 1 ChangeByte- 00:07:26.025 [2024-11-26 20:09:38.719311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.025 [2024-11-26 20:09:38.719341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.025 [2024-11-26 20:09:38.719373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.025 [2024-11-26 20:09:38.719389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.025 [2024-11-26 20:09:38.719418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:26.025 [2024-11-26 20:09:38.719433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.025 #27 NEW cov: 12512 ft: 14691 corp: 8/335b lim: 100 exec/s: 0 rss: 73Mb L: 65/93 MS: 1 InsertRepeatedBytes- 00:07:26.025 [2024-11-26 20:09:38.779357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.025 [2024-11-26 20:09:38.779387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.025 #28 NEW cov: 12512 ft: 14751 corp: 9/370b lim: 100 exec/s: 0 
rss: 73Mb L: 35/93 MS: 1 ShuffleBytes- 00:07:26.025 [2024-11-26 20:09:38.869638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.025 [2024-11-26 20:09:38.869667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.025 [2024-11-26 20:09:38.869714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.025 [2024-11-26 20:09:38.869731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.025 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:26.025 #29 NEW cov: 12535 ft: 14888 corp: 10/419b lim: 100 exec/s: 0 rss: 74Mb L: 49/93 MS: 1 ChangeBit- 00:07:26.284 [2024-11-26 20:09:38.959896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.284 [2024-11-26 20:09:38.959924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.284 [2024-11-26 20:09:38.959972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.284 [2024-11-26 20:09:38.959989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.284 #30 NEW cov: 12535 ft: 15006 corp: 11/468b lim: 100 exec/s: 30 rss: 74Mb L: 49/93 MS: 1 ChangeBinInt- 00:07:26.284 [2024-11-26 20:09:39.020026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.284 [2024-11-26 20:09:39.020054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.284 #35 NEW cov: 12535 ft: 15033 corp: 12/499b lim: 100 exec/s: 35 rss: 74Mb L: 31/93 MS: 5 ChangeByte-InsertByte-InsertByte-CMP-CrossOver- DE: "\001\000\000\000"- 00:07:26.284 [2024-11-26 20:09:39.080164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.284 [2024-11-26 20:09:39.080196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.284 #36 NEW cov: 12535 ft: 15109 corp: 13/531b lim: 100 exec/s: 36 rss: 74Mb L: 32/93 MS: 1 InsertByte- 00:07:26.284 [2024-11-26 20:09:39.170393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.284 [2024-11-26 20:09:39.170421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.542 #37 NEW cov: 12535 ft: 15131 corp: 14/554b lim: 100 exec/s: 37 rss: 74Mb L: 23/93 MS: 1 InsertByte- 00:07:26.542 [2024-11-26 20:09:39.260627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.542 [2024-11-26 20:09:39.260672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.542 #38 NEW cov: 12535 ft: 15160 corp: 15/579b lim: 100 exec/s: 38 rss: 74Mb L: 25/93 MS: 1 EraseBytes- 00:07:26.542 [2024-11-26 20:09:39.310913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 
00:07:26.542 [2024-11-26 20:09:39.310942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.542 [2024-11-26 20:09:39.310973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.542 [2024-11-26 20:09:39.310990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.542 [2024-11-26 20:09:39.311019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:26.542 [2024-11-26 20:09:39.311034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.542 [2024-11-26 20:09:39.311062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:26.542 [2024-11-26 20:09:39.311076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.542 #39 NEW cov: 12535 ft: 15190 corp: 16/672b lim: 100 exec/s: 39 rss: 74Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:07:26.542 [2024-11-26 20:09:39.370994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.542 [2024-11-26 20:09:39.371024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.542 [2024-11-26 20:09:39.371057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.542 [2024-11-26 20:09:39.371073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.542 #40 NEW cov: 12535 ft: 15208 corp: 17/721b lim: 100 exec/s: 40 rss: 74Mb L: 49/93 MS: 1 ChangeBinInt- 00:07:26.542 [2024-11-26 20:09:39.461142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.542 [2024-11-26 20:09:39.461170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.801 #41 NEW cov: 12535 ft: 15228 corp: 18/756b lim: 100 exec/s: 41 rss: 74Mb L: 35/93 MS: 1 ChangeBinInt- 00:07:26.801 [2024-11-26 20:09:39.551456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.801 [2024-11-26 20:09:39.551484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.801 [2024-11-26 20:09:39.551518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.801 [2024-11-26 20:09:39.551534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.801 #42 NEW cov: 12535 ft: 15242 corp: 19/807b lim: 100 exec/s: 42 rss: 74Mb L: 51/93 MS: 1 CopyPart- 00:07:26.801 [2024-11-26 20:09:39.611563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.801 [2024-11-26 20:09:39.611592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.801 #43 NEW cov: 12535 ft: 15304 corp: 20/834b lim: 100 exec/s: 43 rss: 74Mb L: 
27/93 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:26.801 [2024-11-26 20:09:39.701804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.801 [2024-11-26 20:09:39.701833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.113 #44 NEW cov: 12535 ft: 15346 corp: 21/865b lim: 100 exec/s: 44 rss: 74Mb L: 31/93 MS: 1 CopyPart- 00:07:27.113 [2024-11-26 20:09:39.751944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.113 [2024-11-26 20:09:39.751975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.113 #45 NEW cov: 12535 ft: 15358 corp: 22/889b lim: 100 exec/s: 45 rss: 74Mb L: 24/93 MS: 1 EraseBytes- 00:07:27.113 [2024-11-26 20:09:39.842192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.113 [2024-11-26 20:09:39.842221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.113 #51 NEW cov: 12535 ft: 15386 corp: 23/912b lim: 100 exec/s: 51 rss: 74Mb L: 23/93 MS: 1 ChangeByte- 00:07:27.113 [2024-11-26 20:09:39.892309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.114 [2024-11-26 20:09:39.892336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.114 #52 NEW cov: 12535 ft: 15406 corp: 24/939b lim: 100 exec/s: 52 rss: 74Mb L: 27/93 MS: 1 ChangeBit- 00:07:27.114 [2024-11-26 20:09:39.982676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.114 [2024-11-26 20:09:39.982704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.114 [2024-11-26 20:09:39.982750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.114 [2024-11-26 20:09:39.982766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.114 [2024-11-26 20:09:39.982796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.114 [2024-11-26 20:09:39.982811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.114 [2024-11-26 20:09:39.982839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.114 [2024-11-26 20:09:39.982853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.372 #53 NEW cov: 12535 ft: 15465 corp: 25/1036b lim: 100 exec/s: 26 rss: 74Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:07:27.372 #53 DONE cov: 12535 ft: 15465 corp: 25/1036b lim: 100 exec/s: 26 rss: 74Mb 00:07:27.372 ###### Recommended dictionary. ###### 00:07:27.372 "\001\000\000\000" # Uses: 1 00:07:27.372 ###### End of recommended dictionary. 
###### 00:07:27.372 Done 53 runs in 2 second(s) 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.372 20:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:27.372 [2024-11-26 20:09:40.209641] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:27.372 [2024-11-26 20:09:40.209714] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615454 ] 00:07:27.631 [2024-11-26 20:09:40.397736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.631 [2024-11-26 20:09:40.432950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.631 [2024-11-26 20:09:40.492484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.631 [2024-11-26 20:09:40.508844] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:27.631 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.631 INFO: Seed: 419535384 00:07:27.631 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:27.631 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:27.631 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:27.631 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.631 #2 INITED exec/s: 0 rss: 65Mb 00:07:27.631 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.631 This may also happen if the target rejected all inputs we tried so far 00:07:27.889 [2024-11-26 20:09:40.585166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:27.889 [2024-11-26 20:09:40.585209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.889 [2024-11-26 20:09:40.585330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:27.889 [2024-11-26 20:09:40.585353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.889 [2024-11-26 20:09:40.585473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:27.889 [2024-11-26 20:09:40.585496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.147 NEW_FUNC[1/713]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:28.147 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.147 #30 NEW cov: 12257 ft: 12265 corp: 2/36b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:07:28.147 [2024-11-26 20:09:40.925612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14323354218785064646 len:50887 00:07:28.147 [2024-11-26 20:09:40.925655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.147 NEW_FUNC[1/3]: 0xfa8ab8 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:07:28.147 NEW_FUNC[2/3]: 0x19fce28 in nvme_tcp_qpair 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:183 00:07:28.147 #31 NEW cov: 12398 ft: 13309 corp: 3/49b lim: 50 exec/s: 0 rss: 73Mb L: 13/35 MS: 1 InsertRepeatedBytes- 00:07:28.147 [2024-11-26 20:09:40.985774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14323354218785064646 len:50887 00:07:28.147 [2024-11-26 20:09:40.985807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.147 #32 NEW cov: 12404 ft: 13450 corp: 4/62b lim: 50 exec/s: 0 rss: 74Mb L: 13/35 MS: 1 ChangeByte- 00:07:28.147 [2024-11-26 20:09:41.055976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14267418763745216198 len:50887 00:07:28.147 [2024-11-26 20:09:41.056008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.407 #33 NEW cov: 12489 ft: 13708 corp: 5/75b lim: 50 exec/s: 0 rss: 74Mb L: 13/35 MS: 1 ChangeBinInt- 00:07:28.407 [2024-11-26 20:09:41.106357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1793 00:07:28.407 [2024-11-26 20:09:41.106392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.407 [2024-11-26 20:09:41.106486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:28.407 [2024-11-26 20:09:41.106510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.407 [2024-11-26 20:09:41.106642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.407 [2024-11-26 20:09:41.106667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.407 #34 NEW cov: 12489 ft: 13762 corp: 6/112b lim: 50 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 CMP- DE: "\007\000"- 00:07:28.407 [2024-11-26 20:09:41.166582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:28.407 [2024-11-26 20:09:41.166617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.407 [2024-11-26 20:09:41.166684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:28.407 [2024-11-26 20:09:41.166705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.407 [2024-11-26 20:09:41.166817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.407 [2024-11-26 20:09:41.166840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.407 #35 NEW cov: 12489 ft: 13889 corp: 7/149b lim: 50 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:28.407 [2024-11-26 20:09:41.226418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:992699321768396288 len:50887 00:07:28.407 [2024-11-26 20:09:41.226447] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.407 #36 NEW cov: 12489 ft: 13935 corp: 8/160b lim: 50 exec/s: 0 rss: 74Mb L: 11/37 MS: 1 EraseBytes- 00:07:28.407 [2024-11-26 20:09:41.296869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504403158265495552 len:1 00:07:28.407 [2024-11-26 20:09:41.296901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.407 [2024-11-26 20:09:41.296981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:117440512 len:1 00:07:28.407 [2024-11-26 20:09:41.297009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.407 [2024-11-26 20:09:41.297122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.407 [2024-11-26 20:09:41.297148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.407 #37 NEW cov: 12489 ft: 14004 corp: 9/199b lim: 50 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 PersAutoDict- DE: "\007\000"- 00:07:28.666 [2024-11-26 20:09:41.346806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14323354219053500102 len:50887 00:07:28.666 [2024-11-26 20:09:41.346843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.666 #38 NEW cov: 12489 ft: 14014 corp: 10/212b lim: 50 exec/s: 0 rss: 74Mb L: 13/39 MS: 1 ChangeBit- 00:07:28.666 [2024-11-26 20:09:41.387132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504403158265495552 len:1 00:07:28.666 [2024-11-26 20:09:41.387166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.387264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:117440512 len:1 00:07:28.666 [2024-11-26 20:09:41.387285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.387396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.666 [2024-11-26 20:09:41.387416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.666 #39 NEW cov: 12489 ft: 14046 corp: 11/251b lim: 50 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 CopyPart- 00:07:28.666 [2024-11-26 20:09:41.457434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:166026255794176 len:1 00:07:28.666 [2024-11-26 20:09:41.457468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.457561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:28.666 [2024-11-26 20:09:41.457583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.457698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.666 [2024-11-26 20:09:41.457724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.666 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:28.666 #40 NEW cov: 12512 ft: 14130 corp: 12/286b lim: 50 exec/s: 0 rss: 74Mb L: 35/39 MS: 1 ChangeByte- 00:07:28.666 [2024-11-26 20:09:41.497512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504569184521355264 len:1 00:07:28.666 [2024-11-26 20:09:41.497551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.497653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:28.666 [2024-11-26 20:09:41.497672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.497792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.666 [2024-11-26 20:09:41.497824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.666 #41 NEW cov: 12512 ft: 14224 corp: 13/321b lim: 50 exec/s: 0 rss: 74Mb L: 35/39 MS: 1 ChangeBinInt- 00:07:28.666 [2024-11-26 20:09:41.567747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504569184521355264 len:1 00:07:28.666 [2024-11-26 20:09:41.567783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.567887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:200 00:07:28.666 [2024-11-26 20:09:41.567910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.666 [2024-11-26 20:09:41.568041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.666 [2024-11-26 20:09:41.568064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.926 #42 NEW cov: 12512 ft: 14300 corp: 14/357b lim: 50 exec/s: 42 rss: 74Mb L: 36/39 MS: 1 InsertByte- 00:07:28.926 [2024-11-26 20:09:41.627875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504569184521355264 len:1 00:07:28.926 [2024-11-26 20:09:41.627912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.926 [2024-11-26 20:09:41.628003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:200 00:07:28.926 [2024-11-26 20:09:41.628028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:28.926 [2024-11-26 20:09:41.628149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14323354218604316358 len:1 00:07:28.926 [2024-11-26 20:09:41.628174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.926 #43 NEW cov: 12512 ft: 14341 corp: 15/393b lim: 50 exec/s: 43 rss: 74Mb L: 36/39 MS: 1 CrossOver- 00:07:28.926 [2024-11-26 20:09:41.698129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:28.926 [2024-11-26 20:09:41.698166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.926 [2024-11-26 20:09:41.698270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9851624184872960 len:1 00:07:28.926 [2024-11-26 20:09:41.698296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.926 [2024-11-26 20:09:41.698411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.926 [2024-11-26 20:09:41.698433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.926 #44 NEW cov: 12512 ft: 14354 corp: 16/428b lim: 50 exec/s: 44 rss: 74Mb L: 35/39 MS: 1 ChangeBinInt- 00:07:28.926 [2024-11-26 20:09:41.748297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:28.926 [2024-11-26 20:09:41.748332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.926 [2024-11-26 20:09:41.748449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9851624184872960 len:155 00:07:28.926 [2024-11-26 20:09:41.748469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.926 [2024-11-26 20:09:41.748580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.926 [2024-11-26 20:09:41.748606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.926 #45 NEW cov: 12512 ft: 14406 corp: 17/463b lim: 50 exec/s: 45 rss: 74Mb L: 35/39 MS: 1 ChangeByte- 00:07:28.926 [2024-11-26 20:09:41.818506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:166026255794176 len:1 00:07:28.926 [2024-11-26 20:09:41.818538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.926 [2024-11-26 20:09:41.818659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:28.926 [2024-11-26 20:09:41.818686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.926 [2024-11-26 20:09:41.818808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:28.926 [2024-11-26 20:09:41.818833] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.926 #46 NEW cov: 12512 ft: 14422 corp: 18/499b lim: 50 exec/s: 46 rss: 75Mb L: 36/39 MS: 1 InsertByte- 00:07:29.185 [2024-11-26 20:09:41.868389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14323354266029704902 len:50887 00:07:29.185 [2024-11-26 20:09:41.868419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.185 #47 NEW cov: 12512 ft: 14502 corp: 19/513b lim: 50 exec/s: 47 rss: 75Mb L: 14/39 MS: 1 InsertByte- 00:07:29.185 [2024-11-26 20:09:41.918818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:29.185 [2024-11-26 20:09:41.918853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.185 [2024-11-26 20:09:41.918971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:43347146415734784 len:1 00:07:29.185 [2024-11-26 20:09:41.918995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.185 [2024-11-26 20:09:41.919113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:36619 00:07:29.186 [2024-11-26 20:09:41.919134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.186 #48 NEW cov: 12512 ft: 14526 corp: 20/544b lim: 50 exec/s: 48 rss: 75Mb L: 31/39 MS: 1 EraseBytes- 00:07:29.186 [2024-11-26 20:09:41.988972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:117440512 len:38657 00:07:29.186 [2024-11-26 20:09:41.989011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.186 [2024-11-26 20:09:41.989123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:29.186 [2024-11-26 20:09:41.989146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.186 [2024-11-26 20:09:41.989266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:29.186 [2024-11-26 20:09:41.989295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.186 #49 NEW cov: 12512 ft: 14550 corp: 21/581b lim: 50 exec/s: 49 rss: 75Mb L: 37/39 MS: 1 PersAutoDict- DE: "\007\000"- 00:07:29.186 [2024-11-26 20:09:42.039227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504403158265495552 len:1 00:07:29.186 [2024-11-26 20:09:42.039261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.186 [2024-11-26 20:09:42.039381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:117440512 len:1 00:07:29.186 [2024-11-26 20:09:42.039403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.186 [2024-11-26 20:09:42.039527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:504403158265495552 len:1 00:07:29.186 [2024-11-26 20:09:42.039551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.186 #50 NEW cov: 12512 ft: 14557 corp: 22/620b lim: 50 exec/s: 50 rss: 75Mb L: 39/39 MS: 1 CrossOver- 00:07:29.186 [2024-11-26 20:09:42.089351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1793 00:07:29.186 [2024-11-26 20:09:42.089382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.186 [2024-11-26 20:09:42.089505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:21248 len:1 00:07:29.186 [2024-11-26 20:09:42.089527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.186 [2024-11-26 20:09:42.089651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:29.186 [2024-11-26 20:09:42.089675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.186 #51 NEW cov: 12512 ft: 14592 corp: 23/657b lim: 50 exec/s: 51 rss: 75Mb L: 37/39 MS: 1 ChangeByte- 00:07:29.445 [2024-11-26 20:09:42.129399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504569184521355264 len:1 00:07:29.445 [2024-11-26 20:09:42.129430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.445 [2024-11-26 20:09:42.129544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:29.445 [2024-11-26 20:09:42.129566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.445 [2024-11-26 20:09:42.129681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:29.445 [2024-11-26 20:09:42.129704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.445 #52 NEW cov: 12512 ft: 14632 corp: 24/696b lim: 50 exec/s: 52 rss: 75Mb L: 39/39 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:29.445 [2024-11-26 20:09:42.169686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:117440512 len:1 00:07:29.445 [2024-11-26 20:09:42.169717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.445 [2024-11-26 20:09:42.169778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:151 len:1 00:07:29.445 [2024-11-26 20:09:42.169796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.445 [2024-11-26 20:09:42.169912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:29.445 [2024-11-26 20:09:42.169935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.445 [2024-11-26 20:09:42.170044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:144 00:07:29.445 [2024-11-26 20:09:42.170065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.445 #53 NEW cov: 12512 ft: 14898 corp: 25/738b lim: 50 exec/s: 53 rss: 75Mb L: 42/42 MS: 1 CrossOver- 00:07:29.445 [2024-11-26 20:09:42.239536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:29.445 [2024-11-26 20:09:42.239567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.445 [2024-11-26 20:09:42.239692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:144 00:07:29.445 [2024-11-26 20:09:42.239717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.445 #54 NEW cov: 12512 ft: 15154 corp: 26/760b lim: 50 exec/s: 54 rss: 75Mb L: 22/42 MS: 1 EraseBytes- 00:07:29.445 [2024-11-26 20:09:42.279507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14323354218785064646 len:50887 00:07:29.445 [2024-11-26 20:09:42.279533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.445 #55 NEW cov: 12512 ft: 15178 corp: 27/773b lim: 50 exec/s: 55 rss: 75Mb L: 13/42 MS: 1 ShuffleBytes- 00:07:29.445 [2024-11-26 20:09:42.349868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:504403158265495552 len:1 00:07:29.445 [2024-11-26 20:09:42.349900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.445 [2024-11-26 20:09:42.349991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:117440512 len:1 00:07:29.445 [2024-11-26 20:09:42.350014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.704 #56 NEW cov: 12512 ft: 15215 corp: 28/796b lim: 50 exec/s: 56 rss: 75Mb L: 23/42 MS: 1 EraseBytes- 00:07:29.704 [2024-11-26 20:09:42.420010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:992699321757058560 len:50887 00:07:29.704 [2024-11-26 20:09:42.420038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.704 #57 NEW cov: 12512 ft: 15228 corp: 29/807b lim: 50 exec/s: 57 rss: 75Mb L: 11/42 MS: 1 ChangeByte- 00:07:29.704 [2024-11-26 20:09:42.490480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:166026255794176 len:1 00:07:29.704 [2024-11-26 20:09:42.490516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.704 [2024-11-26 20:09:42.490630] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:29.704 [2024-11-26 20:09:42.490655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.704 [2024-11-26 20:09:42.490778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4278190079 len:1 00:07:29.704 [2024-11-26 20:09:42.490806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.704 #58 NEW cov: 12512 ft: 15239 corp: 30/843b lim: 50 exec/s: 58 rss: 75Mb L: 36/42 MS: 1 ChangeBinInt- 00:07:29.704 [2024-11-26 20:09:42.560713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:32257 00:07:29.704 [2024-11-26 20:09:42.560753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.704 [2024-11-26 20:09:42.560853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:29.704 [2024-11-26 20:09:42.560875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.704 [2024-11-26 20:09:42.560994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:29.704 [2024-11-26 20:09:42.561016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.704 #59 NEW cov: 12512 ft: 15250 corp: 31/880b lim: 50 exec/s: 29 rss: 75Mb L: 37/42 MS: 1 ChangeByte- 00:07:29.704 #59 DONE cov: 12512 ft: 15250 corp: 31/880b lim: 50 exec/s: 29 rss: 75Mb 00:07:29.704 ###### Recommended dictionary. ###### 00:07:29.704 "\007\000" # Uses: 2 00:07:29.704 "\000\000\000\000" # Uses: 1 00:07:29.704 ###### End of recommended dictionary. 
###### 00:07:29.704 Done 59 runs in 2 second(s) 00:07:29.963 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.963 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.963 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.963 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:29.963 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.964 20:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:29.964 [2024-11-26 20:09:42.722589] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:29.964 [2024-11-26 20:09:42.722673] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615926 ] 00:07:30.222 [2024-11-26 20:09:42.910654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.222 [2024-11-26 20:09:42.944899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.222 [2024-11-26 20:09:43.003708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.222 [2024-11-26 20:09:43.020008] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:30.222 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.222 INFO: Seed: 2930528559 00:07:30.222 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:30.222 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:30.222 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:30.222 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.222 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.222 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.222 This may also happen if the target rejected all inputs we tried so far 00:07:30.222 [2024-11-26 20:09:43.086472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.222 [2024-11-26 20:09:43.086503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.222 [2024-11-26 20:09:43.086628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:30.222 [2024-11-26 20:09:43.086651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.481 NEW_FUNC[1/718]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:30.481 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.481 #3 NEW cov: 12344 ft: 12345 corp: 2/45b lim: 90 exec/s: 0 rss: 72Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:30.740 [2024-11-26 20:09:43.417247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.740 [2024-11-26 20:09:43.417283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.740 [2024-11-26 20:09:43.417426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:30.740 [2024-11-26 20:09:43.417456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.740 #4 NEW cov: 12457 ft: 12769 corp: 3/89b lim: 90 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 CopyPart- 00:07:30.740 [2024-11-26 20:09:43.487493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.740 [2024-11-26 20:09:43.487527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.740 [2024-11-26 20:09:43.487658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:30.740 [2024-11-26 20:09:43.487684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.740 #5 NEW cov: 12463 ft: 13177 corp: 4/133b lim: 90 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 CopyPart- 00:07:30.740 [2024-11-26 20:09:43.537560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.740 [2024-11-26 20:09:43.537593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.740 [2024-11-26 20:09:43.537723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:30.740 [2024-11-26 20:09:43.537750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.740 #6 NEW cov: 12548 ft: 13474 corp: 5/177b lim: 90 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 ChangeBinInt- 00:07:30.740 [2024-11-26 20:09:43.607425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.740 [2024-11-26 20:09:43.607459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.740 #7 NEW cov: 12548 ft: 14256 corp: 6/205b lim: 90 exec/s: 0 rss: 73Mb L: 28/44 MS: 1 EraseBytes- 00:07:30.740 [2024-11-26 20:09:43.657977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.740 [2024-11-26 20:09:43.658005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.740 [2024-11-26 20:09:43.658130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:30.740 [2024-11-26 20:09:43.658158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.999 #8 NEW cov: 12548 ft: 14453 corp: 7/252b lim: 90 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:07:30.999 [2024-11-26 20:09:43.727839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.999 [2024-11-26 20:09:43.727867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.999 #9 NEW cov: 12548 ft: 14664 corp: 8/282b lim: 90 exec/s: 0 rss: 73Mb L: 30/47 MS: 1 CopyPart- 00:07:30.999 [2024-11-26 20:09:43.778554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.999 [2024-11-26 20:09:43.778589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.999 [2024-11-26 20:09:43.778631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:30.999 [2024-11-26 20:09:43.778642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.999 [2024-11-26 
20:09:43.778676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:30.999 [2024-11-26 20:09:43.778696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.999 #10 NEW cov: 12548 ft: 14988 corp: 9/350b lim: 90 exec/s: 0 rss: 73Mb L: 68/68 MS: 1 InsertRepeatedBytes- 00:07:30.999 [2024-11-26 20:09:43.848218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.999 [2024-11-26 20:09:43.848250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.999 #11 NEW cov: 12548 ft: 15020 corp: 10/378b lim: 90 exec/s: 0 rss: 73Mb L: 28/68 MS: 1 ChangeByte- 00:07:30.999 [2024-11-26 20:09:43.898479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:30.999 [2024-11-26 20:09:43.898504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.258 #12 NEW cov: 12548 ft: 15094 corp: 11/407b lim: 90 exec/s: 0 rss: 73Mb L: 29/68 MS: 1 InsertByte- 00:07:31.258 [2024-11-26 20:09:43.969158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.258 [2024-11-26 20:09:43.969189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.258 [2024-11-26 20:09:43.969284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.258 [2024-11-26 20:09:43.969306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.258 [2024-11-26 20:09:43.969434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:31.258 [2024-11-26 20:09:43.969458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.258 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:31.258 #13 NEW cov: 12571 ft: 15155 corp: 12/476b lim: 90 exec/s: 0 rss: 73Mb L: 69/69 MS: 1 InsertByte- 00:07:31.258 [2024-11-26 20:09:44.038769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.258 [2024-11-26 20:09:44.038795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.258 #14 NEW cov: 12571 ft: 15180 corp: 13/504b lim: 90 exec/s: 14 rss: 73Mb L: 28/69 MS: 1 ChangeByte- 00:07:31.258 [2024-11-26 20:09:44.089491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.258 [2024-11-26 20:09:44.089522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.258 [2024-11-26 20:09:44.089602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.258 [2024-11-26 20:09:44.089623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:31.258 [2024-11-26 20:09:44.089749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:31.258 [2024-11-26 20:09:44.089770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.258 #15 NEW cov: 12571 ft: 15229 corp: 14/573b lim: 90 exec/s: 15 rss: 74Mb L: 69/69 MS: 1 ChangeBinInt- 00:07:31.258 [2024-11-26 20:09:44.159437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.258 [2024-11-26 20:09:44.159467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.258 [2024-11-26 20:09:44.159592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.258 [2024-11-26 20:09:44.159622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.258 #16 NEW cov: 12571 ft: 15263 corp: 15/618b lim: 90 exec/s: 16 rss: 74Mb L: 45/69 MS: 1 InsertByte- 00:07:31.516 [2024-11-26 20:09:44.209614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.516 [2024-11-26 20:09:44.209648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.516 [2024-11-26 20:09:44.209773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.516 [2024-11-26 20:09:44.209797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.516 #17 NEW cov: 12571 ft: 15324 corp: 16/662b lim: 90 exec/s: 17 rss: 74Mb L: 44/69 MS: 1 ShuffleBytes- 00:07:31.517 [2024-11-26 20:09:44.260296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.517 [2024-11-26 20:09:44.260328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.517 [2024-11-26 20:09:44.260396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.517 [2024-11-26 20:09:44.260417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.517 [2024-11-26 20:09:44.260549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:31.517 [2024-11-26 20:09:44.260573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.517 [2024-11-26 20:09:44.260697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:31.517 [2024-11-26 20:09:44.260718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.517 #18 NEW cov: 12571 ft: 15702 corp: 17/739b lim: 90 exec/s: 18 rss: 74Mb L: 77/77 MS: 1 InsertRepeatedBytes- 00:07:31.517 [2024-11-26 20:09:44.309582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.517 [2024-11-26 20:09:44.309620] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.517 #19 NEW cov: 12571 ft: 15828 corp: 18/769b lim: 90 exec/s: 19 rss: 74Mb L: 30/77 MS: 1 ChangeBinInt- 00:07:31.517 [2024-11-26 20:09:44.379896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.517 [2024-11-26 20:09:44.379922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.517 #20 NEW cov: 12571 ft: 15843 corp: 19/799b lim: 90 exec/s: 20 rss: 74Mb L: 30/77 MS: 1 ShuffleBytes- 00:07:31.775 [2024-11-26 20:09:44.450369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.775 [2024-11-26 20:09:44.450403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.775 [2024-11-26 20:09:44.450520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.775 [2024-11-26 20:09:44.450545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.775 #25 NEW cov: 12571 ft: 15871 corp: 20/852b lim: 90 exec/s: 25 rss: 74Mb L: 53/77 MS: 5 ChangeBinInt-InsertRepeatedBytes-InsertByte-CopyPart-InsertRepeatedBytes- 00:07:31.775 [2024-11-26 20:09:44.500968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.775 [2024-11-26 20:09:44.501000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.776 [2024-11-26 20:09:44.501072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.776 [2024-11-26 20:09:44.501094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.776 [2024-11-26 20:09:44.501217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:31.776 [2024-11-26 20:09:44.501239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.776 [2024-11-26 20:09:44.501373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:31.776 [2024-11-26 20:09:44.501393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.776 #26 NEW cov: 12571 ft: 15920 corp: 21/939b lim: 90 exec/s: 26 rss: 74Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:31.776 [2024-11-26 20:09:44.570766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.776 [2024-11-26 20:09:44.570800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.776 [2024-11-26 20:09:44.570928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.776 [2024-11-26 20:09:44.570949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:31.776 #27 NEW cov: 12571 ft: 15936 corp: 22/984b lim: 90 exec/s: 27 rss: 74Mb L: 45/87 MS: 1 CopyPart- 00:07:31.776 [2024-11-26 20:09:44.620815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.776 [2024-11-26 20:09:44.620849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.776 [2024-11-26 20:09:44.620973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.776 [2024-11-26 20:09:44.620999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.776 #28 NEW cov: 12571 ft: 15956 corp: 23/1032b lim: 90 exec/s: 28 rss: 74Mb L: 48/87 MS: 1 InsertByte- 00:07:31.776 [2024-11-26 20:09:44.691106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.776 [2024-11-26 20:09:44.691143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.776 [2024-11-26 20:09:44.691273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.776 [2024-11-26 20:09:44.691297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.035 #29 NEW cov: 12571 ft: 15966 corp: 24/1076b lim: 90 exec/s: 29 rss: 74Mb L: 44/87 MS: 1 ChangeBit- 00:07:32.035 [2024-11-26 20:09:44.741234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.035 [2024-11-26 20:09:44.741262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.035 [2024-11-26 20:09:44.741379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.035 [2024-11-26 20:09:44.741404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.035 #30 NEW cov: 12571 ft: 15980 corp: 25/1121b lim: 90 exec/s: 30 rss: 74Mb L: 45/87 MS: 1 CMP- DE: "\377\003"- 00:07:32.035 [2024-11-26 20:09:44.811455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.035 [2024-11-26 20:09:44.811489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.035 [2024-11-26 20:09:44.811631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.035 [2024-11-26 20:09:44.811655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.035 #31 NEW cov: 12571 ft: 15991 corp: 26/1165b lim: 90 exec/s: 31 rss: 74Mb L: 44/87 MS: 1 CMP- DE: "\014\000\000\000\000\000\000\000"- 00:07:32.035 [2024-11-26 20:09:44.881405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.035 [2024-11-26 20:09:44.881439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.035 #32 NEW cov: 12571 ft: 16008 corp: 27/1195b lim: 
90 exec/s: 32 rss: 74Mb L: 30/87 MS: 1 ChangeByte- 00:07:32.035 [2024-11-26 20:09:44.931660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.035 [2024-11-26 20:09:44.931693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.035 #33 NEW cov: 12571 ft: 16118 corp: 28/1229b lim: 90 exec/s: 33 rss: 74Mb L: 34/87 MS: 1 EraseBytes- 00:07:32.294 [2024-11-26 20:09:44.981560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.294 [2024-11-26 20:09:44.981588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.294 #34 NEW cov: 12571 ft: 16153 corp: 29/1255b lim: 90 exec/s: 34 rss: 74Mb L: 26/87 MS: 1 EraseBytes- 00:07:32.294 [2024-11-26 20:09:45.051849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.294 [2024-11-26 20:09:45.051878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.294 #35 NEW cov: 12571 ft: 16159 corp: 30/1283b lim: 90 exec/s: 17 rss: 74Mb L: 28/87 MS: 1 ShuffleBytes- 00:07:32.294 #35 DONE cov: 12571 ft: 16159 corp: 30/1283b lim: 90 exec/s: 17 rss: 74Mb 00:07:32.294 ###### Recommended dictionary. ###### 00:07:32.294 "\377\003" # Uses: 0 00:07:32.294 "\014\000\000\000\000\000\000\000" # Uses: 0 00:07:32.294 ###### End of recommended dictionary. ###### 00:07:32.294 Done 35 runs in 2 second(s) 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.294 20:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:32.294 [2024-11-26 20:09:45.217185] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:32.294 [2024-11-26 20:09:45.217261] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616454 ] 00:07:32.553 [2024-11-26 20:09:45.405587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.553 [2024-11-26 20:09:45.439334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.813 [2024-11-26 20:09:45.498229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.813 [2024-11-26 20:09:45.514559] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:32.813 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.813 INFO: Seed: 1129576168 00:07:32.813 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:32.813 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:32.813 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:32.813 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.813 #2 INITED exec/s: 0 rss: 65Mb 00:07:32.813 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:32.813 This may also happen if the target rejected all inputs we tried so far 00:07:32.813 [2024-11-26 20:09:45.560348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:32.813 [2024-11-26 20:09:45.560378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.813 [2024-11-26 20:09:45.560441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:32.813 [2024-11-26 20:09:45.560460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.813 [2024-11-26 20:09:45.560516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:32.813 [2024-11-26 20:09:45.560532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.813 [2024-11-26 20:09:45.560586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:32.813 [2024-11-26 20:09:45.560605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.813 [2024-11-26 20:09:45.560663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:32.813 [2024-11-26 20:09:45.560677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:33.073 NEW_FUNC[1/718]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:33.073 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.074 #11 NEW cov: 12319 ft: 12310 corp: 2/51b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 4 CrossOver-ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:33.074 [2024-11-26 20:09:45.880730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.074 [2024-11-26 20:09:45.880763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.074 [2024-11-26 20:09:45.880837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.074 [2024-11-26 20:09:45.880854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.074 #12 NEW cov: 12432 ft: 13362 corp: 3/78b lim: 50 exec/s: 0 rss: 73Mb L: 27/50 MS: 1 EraseBytes- 00:07:33.074 [2024-11-26 20:09:45.940983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.074 [2024-11-26 20:09:45.941010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.074 [2024-11-26 20:09:45.941066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.074 [2024-11-26 20:09:45.941081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:33.074 [2024-11-26 20:09:45.941137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.074 [2024-11-26 20:09:45.941153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.074 #17 NEW cov: 12438 ft: 13859 corp: 4/116b lim: 50 exec/s: 0 rss: 73Mb L: 38/50 MS: 5 ChangeBinInt-ChangeBit-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:33.074 [2024-11-26 20:09:45.980946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.074 [2024-11-26 20:09:45.980972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.074 [2024-11-26 20:09:45.981018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.074 [2024-11-26 20:09:45.981034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.334 #18 NEW cov: 12523 ft: 14048 corp: 5/143b lim: 50 exec/s: 0 rss: 73Mb L: 27/50 MS: 1 ChangeByte- 00:07:33.334 [2024-11-26 20:09:46.041100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.334 [2024-11-26 20:09:46.041131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.334 [2024-11-26 20:09:46.041176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.334 [2024-11-26 20:09:46.041191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.334 #19 NEW cov: 12523 ft: 14229 corp: 6/170b lim: 50 exec/s: 0 rss: 73Mb L: 27/50 MS: 1 ChangeByte- 00:07:33.334 [2024-11-26 20:09:46.101771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.335 [2024-11-26 20:09:46.101798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.101852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.335 [2024-11-26 20:09:46.101869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.101925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.335 [2024-11-26 20:09:46.101941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.101996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:33.335 [2024-11-26 20:09:46.102012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.102069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:33.335 [2024-11-26 20:09:46.102085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:33.335 #20 NEW cov: 12523 ft: 14319 corp: 7/220b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeByte- 00:07:33.335 [2024-11-26 20:09:46.141475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.335 [2024-11-26 20:09:46.141500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.141565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.335 [2024-11-26 20:09:46.141581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.141644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.335 [2024-11-26 20:09:46.141660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.335 #21 NEW cov: 12523 ft: 14386 corp: 8/253b lim: 50 exec/s: 0 rss: 73Mb L: 33/50 MS: 1 EraseBytes- 00:07:33.335 [2024-11-26 20:09:46.181442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.335 [2024-11-26 20:09:46.181468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.181522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.335 [2024-11-26 20:09:46.181539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.335 #22 NEW cov: 12523 ft: 14448 corp: 9/274b lim: 50 exec/s: 0 rss: 73Mb L: 21/50 MS: 1 EraseBytes- 00:07:33.335 [2024-11-26 20:09:46.241638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.335 [2024-11-26 20:09:46.241665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.335 [2024-11-26 20:09:46.241703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.335 [2024-11-26 20:09:46.241719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.596 #23 NEW cov: 12523 ft: 14532 corp: 10/301b lim: 50 exec/s: 0 rss: 74Mb L: 27/50 MS: 1 ChangeBit- 00:07:33.596 [2024-11-26 20:09:46.282213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.596 [2024-11-26 20:09:46.282242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.282298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.596 [2024-11-26 20:09:46.282315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.282371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 
00:07:33.596 [2024-11-26 20:09:46.282387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.282442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:33.596 [2024-11-26 20:09:46.282457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.282515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:33.596 [2024-11-26 20:09:46.282529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:33.596 #24 NEW cov: 12523 ft: 14594 corp: 11/351b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 ShuffleBytes- 00:07:33.596 [2024-11-26 20:09:46.341931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.596 [2024-11-26 20:09:46.341960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.342007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.596 [2024-11-26 20:09:46.342023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.596 #25 NEW cov: 12523 ft: 14660 corp: 12/378b lim: 50 exec/s: 0 rss: 74Mb L: 27/50 MS: 1 ChangeBinInt- 00:07:33.596 [2024-11-26 20:09:46.402101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.596 [2024-11-26 20:09:46.402130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.402181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.596 [2024-11-26 20:09:46.402198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.596 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:33.596 #26 NEW cov: 12546 ft: 14676 corp: 13/399b lim: 50 exec/s: 0 rss: 74Mb L: 21/50 MS: 1 CopyPart- 00:07:33.596 [2024-11-26 20:09:46.462429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.596 [2024-11-26 20:09:46.462456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.462492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.596 [2024-11-26 20:09:46.462509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.462569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.596 [2024-11-26 20:09:46.462586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.596 #27 NEW cov: 
12546 ft: 14751 corp: 14/432b lim: 50 exec/s: 0 rss: 74Mb L: 33/50 MS: 1 CrossOver- 00:07:33.596 [2024-11-26 20:09:46.502494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.596 [2024-11-26 20:09:46.502522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.502581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.596 [2024-11-26 20:09:46.502601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.596 [2024-11-26 20:09:46.502661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.596 [2024-11-26 20:09:46.502677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.596 #30 NEW cov: 12546 ft: 14768 corp: 15/467b lim: 50 exec/s: 0 rss: 74Mb L: 35/50 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:33.856 [2024-11-26 20:09:46.542604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.856 [2024-11-26 20:09:46.542632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.542680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.856 [2024-11-26 20:09:46.542696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.542753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.856 [2024-11-26 20:09:46.542769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.856 #31 NEW cov: 12546 ft: 14834 corp: 16/500b lim: 50 exec/s: 31 rss: 74Mb L: 33/50 MS: 1 ChangeBit- 00:07:33.856 [2024-11-26 20:09:46.582560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.856 [2024-11-26 20:09:46.582590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.582641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.856 [2024-11-26 20:09:46.582659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.856 #32 NEW cov: 12546 ft: 14845 corp: 17/523b lim: 50 exec/s: 32 rss: 74Mb L: 23/50 MS: 1 EraseBytes- 00:07:33.856 [2024-11-26 20:09:46.622812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.856 [2024-11-26 20:09:46.622839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.622903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.856 [2024-11-26 20:09:46.622920] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.622978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.856 [2024-11-26 20:09:46.622994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.856 #33 NEW cov: 12546 ft: 14859 corp: 18/556b lim: 50 exec/s: 33 rss: 74Mb L: 33/50 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:33.856 [2024-11-26 20:09:46.682889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.856 [2024-11-26 20:09:46.682915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.682955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.856 [2024-11-26 20:09:46.682971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.856 #34 NEW cov: 12546 ft: 14895 corp: 19/582b lim: 50 exec/s: 34 rss: 74Mb L: 26/50 MS: 1 EraseBytes- 00:07:33.856 [2024-11-26 20:09:46.743139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.856 [2024-11-26 20:09:46.743165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.743201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.856 [2024-11-26 20:09:46.743216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.743274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.856 [2024-11-26 20:09:46.743290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.856 #35 NEW cov: 12546 ft: 14907 corp: 20/615b lim: 50 exec/s: 35 rss: 74Mb L: 33/50 MS: 1 ShuffleBytes- 00:07:33.856 [2024-11-26 20:09:46.783285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:33.856 [2024-11-26 20:09:46.783313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.783364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:33.856 [2024-11-26 20:09:46.783379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.856 [2024-11-26 20:09:46.783436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:33.856 [2024-11-26 20:09:46.783454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.116 #36 NEW cov: 12546 ft: 14936 corp: 21/648b lim: 50 exec/s: 36 rss: 74Mb L: 33/50 MS: 1 ChangeBit- 00:07:34.116 [2024-11-26 20:09:46.823197] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.116 [2024-11-26 20:09:46.823223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.116 [2024-11-26 20:09:46.823262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.116 [2024-11-26 20:09:46.823279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.116 #37 NEW cov: 12546 ft: 14944 corp: 22/675b lim: 50 exec/s: 37 rss: 74Mb L: 27/50 MS: 1 CrossOver- 00:07:34.116 [2024-11-26 20:09:46.883676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.116 [2024-11-26 20:09:46.883704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.116 [2024-11-26 20:09:46.883770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.116 [2024-11-26 20:09:46.883786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.116 [2024-11-26 20:09:46.883841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.116 [2024-11-26 20:09:46.883860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.116 [2024-11-26 20:09:46.883916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.116 [2024-11-26 20:09:46.883933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.116 #38 NEW cov: 12546 ft: 14962 corp: 23/722b lim: 50 exec/s: 38 rss: 74Mb L: 47/50 MS: 1 InsertRepeatedBytes- 00:07:34.116 [2024-11-26 20:09:46.943842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.117 [2024-11-26 20:09:46.943868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.117 [2024-11-26 20:09:46.943940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.117 [2024-11-26 20:09:46.943956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.117 [2024-11-26 20:09:46.944009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.117 [2024-11-26 20:09:46.944026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.117 [2024-11-26 20:09:46.944082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.117 [2024-11-26 20:09:46.944098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.117 #39 NEW cov: 12546 ft: 14976 corp: 24/764b lim: 50 exec/s: 39 rss: 75Mb L: 42/50 MS: 1 CrossOver- 00:07:34.117 [2024-11-26 20:09:47.004171] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.117 [2024-11-26 20:09:47.004197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.117 [2024-11-26 20:09:47.004270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.117 [2024-11-26 20:09:47.004285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.117 [2024-11-26 20:09:47.004341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.117 [2024-11-26 20:09:47.004356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.117 [2024-11-26 20:09:47.004411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.117 [2024-11-26 20:09:47.004426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.117 [2024-11-26 20:09:47.004483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:34.117 [2024-11-26 20:09:47.004499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:34.117 #40 NEW cov: 12546 ft: 14988 corp: 25/814b lim: 50 exec/s: 40 rss: 75Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:34.376 [2024-11-26 20:09:47.064345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.376 [2024-11-26 20:09:47.064371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.376 [2024-11-26 20:09:47.064443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.376 [2024-11-26 20:09:47.064459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.376 [2024-11-26 20:09:47.064516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.376 [2024-11-26 20:09:47.064536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.376 [2024-11-26 20:09:47.064592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.376 [2024-11-26 20:09:47.064613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.376 [2024-11-26 20:09:47.064680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:34.376 [2024-11-26 20:09:47.064695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:34.376 #41 NEW cov: 12546 ft: 15005 corp: 26/864b lim: 50 exec/s: 41 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:34.376 [2024-11-26 20:09:47.103995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.376 [2024-11-26 
20:09:47.104023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.376 [2024-11-26 20:09:47.104089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.376 [2024-11-26 20:09:47.104105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.376 #42 NEW cov: 12546 ft: 15016 corp: 27/886b lim: 50 exec/s: 42 rss: 75Mb L: 22/50 MS: 1 EraseBytes- 00:07:34.376 [2024-11-26 20:09:47.164306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.377 [2024-11-26 20:09:47.164332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.377 [2024-11-26 20:09:47.164381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.377 [2024-11-26 20:09:47.164398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.377 [2024-11-26 20:09:47.164453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.377 [2024-11-26 20:09:47.164469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.377 #43 NEW cov: 12546 ft: 15022 corp: 28/919b lim: 50 exec/s: 43 rss: 75Mb L: 33/50 MS: 1 ChangeByte- 00:07:34.377 [2024-11-26 20:09:47.204265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.377 [2024-11-26 20:09:47.204293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.377 [2024-11-26 20:09:47.204333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.377 [2024-11-26 20:09:47.204350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.377 #44 NEW cov: 12546 ft: 15098 corp: 29/944b lim: 50 exec/s: 44 rss: 75Mb L: 25/50 MS: 1 EraseBytes- 00:07:34.377 [2024-11-26 20:09:47.244404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.377 [2024-11-26 20:09:47.244430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.377 [2024-11-26 20:09:47.244485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.377 [2024-11-26 20:09:47.244502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.377 #45 NEW cov: 12546 ft: 15135 corp: 30/965b lim: 50 exec/s: 45 rss: 75Mb L: 21/50 MS: 1 ChangeBinInt- 00:07:34.377 [2024-11-26 20:09:47.304917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.377 [2024-11-26 20:09:47.304947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.377 [2024-11-26 20:09:47.304996] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.377 [2024-11-26 20:09:47.305012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.377 [2024-11-26 20:09:47.305080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.377 [2024-11-26 20:09:47.305095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.377 [2024-11-26 20:09:47.305151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.377 [2024-11-26 20:09:47.305167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.637 #46 NEW cov: 12546 ft: 15140 corp: 31/1010b lim: 50 exec/s: 46 rss: 75Mb L: 45/50 MS: 1 CrossOver- 00:07:34.637 [2024-11-26 20:09:47.344806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.637 [2024-11-26 20:09:47.344832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.344896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.637 [2024-11-26 20:09:47.344913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.344971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.637 [2024-11-26 20:09:47.344987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.637 #47 NEW cov: 12546 ft: 15162 corp: 32/1043b lim: 50 exec/s: 47 rss: 75Mb L: 33/50 MS: 1 CrossOver- 00:07:34.637 [2024-11-26 20:09:47.385116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.637 [2024-11-26 20:09:47.385142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.385197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.637 [2024-11-26 20:09:47.385213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.385268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.637 [2024-11-26 20:09:47.385283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.385340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.637 [2024-11-26 20:09:47.385356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.637 #48 NEW cov: 12546 ft: 15194 corp: 33/1085b lim: 50 exec/s: 48 rss: 75Mb L: 42/50 MS: 1 CrossOver- 00:07:34.637 [2024-11-26 20:09:47.425053] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.637 [2024-11-26 20:09:47.425079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.425124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.637 [2024-11-26 20:09:47.425140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.425197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.637 [2024-11-26 20:09:47.425216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.637 #49 NEW cov: 12546 ft: 15220 corp: 34/1118b lim: 50 exec/s: 49 rss: 75Mb L: 33/50 MS: 1 ShuffleBytes- 00:07:34.637 [2024-11-26 20:09:47.485246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.637 [2024-11-26 20:09:47.485272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.485335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.637 [2024-11-26 20:09:47.485351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.485405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.637 [2024-11-26 20:09:47.485421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.637 #50 NEW cov: 12546 ft: 15249 corp: 35/1155b lim: 50 exec/s: 50 rss: 75Mb L: 37/50 MS: 1 EraseBytes- 00:07:34.637 [2024-11-26 20:09:47.525658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.637 [2024-11-26 20:09:47.525684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.525744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.637 [2024-11-26 20:09:47.525759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.525816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.637 [2024-11-26 20:09:47.525830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.525887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.637 [2024-11-26 20:09:47.525902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.637 [2024-11-26 20:09:47.525957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:34.637 [2024-11-26 
20:09:47.525973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:34.637 #51 NEW cov: 12546 ft: 15268 corp: 36/1205b lim: 50 exec/s: 25 rss: 75Mb L: 50/50 MS: 1 ChangeASCIIInt- 00:07:34.637 #51 DONE cov: 12546 ft: 15268 corp: 36/1205b lim: 50 exec/s: 25 rss: 75Mb 00:07:34.637 ###### Recommended dictionary. ###### 00:07:34.637 "\002\000\000\000\000\000\000\000" # Uses: 0 00:07:34.637 ###### End of recommended dictionary. ###### 00:07:34.637 Done 51 runs in 2 second(s) 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.897 20:09:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:34.897 [2024-11-26 20:09:47.690723] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
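The "#N NEW cov:" records in the run above are standard libFuzzer status lines; their fields are not explained anywhere in this log, so the annotated example below breaks one of them down. The field meanings follow libFuzzer's usual output conventions rather than anything SPDK-specific.

# One status line from the fuzzer run above, annotated:
#
#   #24 NEW cov: 12523 ft: 14594 corp: 11/351b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 ShuffleBytes-
#
#   #24            the 24th input executed since the run started
#   NEW            this input reached new coverage and was added to the corpus
#   cov: 12523     coverage points (edges) hit so far
#   ft: 14594      features, libFuzzer's finer-grained coverage signal
#   corp: 11/351b  the corpus now holds 11 inputs totalling 351 bytes
#   lim: 50        current cap on generated input length
#   exec/s: 0      executions per second (stays 0 until enough wall-clock time has elapsed to compute a rate)
#   rss: 74Mb      resident memory of the fuzzer process
#   L: 50/50       length of this input / largest input currently in the corpus
#   MS: 1 ShuffleBytes-   mutation sequence that produced it (a single ShuffleBytes step);
#                  a trailing 'DE: "..."' names the dictionary entry used by a CMP-style mutation
#
# The closing "#51 DONE cov: ... Done 51 runs in 2 second(s)" lines and the
# "Recommended dictionary" block are libFuzzer's end-of-run summary: total
# executions, final coverage, and byte sequences it suggests keeping as
# dictionary entries for future runs.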
00:07:34.897 [2024-11-26 20:09:47.690795] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616750 ] 00:07:35.157 [2024-11-26 20:09:47.879438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.157 [2024-11-26 20:09:47.916955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.157 [2024-11-26 20:09:47.976364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.157 [2024-11-26 20:09:47.992746] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:35.157 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.157 INFO: Seed: 3607563354 00:07:35.157 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:35.157 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:35.157 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:35.157 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.157 #2 INITED exec/s: 0 rss: 65Mb 00:07:35.157 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:35.157 This may also happen if the target rejected all inputs we tried so far 00:07:35.157 [2024-11-26 20:09:48.059216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.157 [2024-11-26 20:09:48.059256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.157 [2024-11-26 20:09:48.059380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.157 [2024-11-26 20:09:48.059404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.157 [2024-11-26 20:09:48.059521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.157 [2024-11-26 20:09:48.059541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.676 NEW_FUNC[1/717]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:35.676 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.676 #4 NEW cov: 12340 ft: 12315 corp: 2/53b lim: 85 exec/s: 0 rss: 73Mb L: 52/52 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:35.676 [2024-11-26 20:09:48.409978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.676 [2024-11-26 20:09:48.410019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.676 [2024-11-26 20:09:48.410128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.676 [2024-11-26 20:09:48.410151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.676 
[2024-11-26 20:09:48.410268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.676 [2024-11-26 20:09:48.410292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.676 #5 NEW cov: 12457 ft: 12906 corp: 3/105b lim: 85 exec/s: 0 rss: 73Mb L: 52/52 MS: 1 ChangeBinInt- 00:07:35.676 [2024-11-26 20:09:48.470245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.676 [2024-11-26 20:09:48.470277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.676 [2024-11-26 20:09:48.470352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.676 [2024-11-26 20:09:48.470371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.676 [2024-11-26 20:09:48.470487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.676 [2024-11-26 20:09:48.470505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.676 NEW_FUNC[1/1]: 0x1083718 in _sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1347 00:07:35.676 #6 NEW cov: 12464 ft: 13188 corp: 4/157b lim: 85 exec/s: 0 rss: 73Mb L: 52/52 MS: 1 ChangeBinInt- 00:07:35.676 [2024-11-26 20:09:48.530248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.676 [2024-11-26 20:09:48.530279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.676 [2024-11-26 20:09:48.530370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.676 [2024-11-26 20:09:48.530392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.676 [2024-11-26 20:09:48.530512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.676 [2024-11-26 20:09:48.530531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.676 #7 NEW cov: 12549 ft: 13451 corp: 5/209b lim: 85 exec/s: 0 rss: 73Mb L: 52/52 MS: 1 ChangeBit- 00:07:35.676 [2024-11-26 20:09:48.569913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.676 [2024-11-26 20:09:48.569939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.676 #11 NEW cov: 12549 ft: 14361 corp: 6/241b lim: 85 exec/s: 0 rss: 73Mb L: 32/52 MS: 4 ChangeByte-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:35.936 [2024-11-26 20:09:48.610544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.936 [2024-11-26 20:09:48.610578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.610637] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.936 [2024-11-26 20:09:48.610656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.610785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.936 [2024-11-26 20:09:48.610807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.936 #12 NEW cov: 12549 ft: 14487 corp: 7/293b lim: 85 exec/s: 0 rss: 73Mb L: 52/52 MS: 1 ChangeBinInt- 00:07:35.936 [2024-11-26 20:09:48.650541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.936 [2024-11-26 20:09:48.650573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.650636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.936 [2024-11-26 20:09:48.650661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.650776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.936 [2024-11-26 20:09:48.650795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.936 #13 NEW cov: 12549 ft: 14551 corp: 8/345b lim: 85 exec/s: 0 rss: 73Mb L: 52/52 MS: 1 ChangeByte- 00:07:35.936 [2024-11-26 20:09:48.700244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.936 [2024-11-26 20:09:48.700271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.936 #14 NEW cov: 12549 ft: 14601 corp: 9/377b lim: 85 exec/s: 0 rss: 73Mb L: 32/52 MS: 1 CopyPart- 00:07:35.936 [2024-11-26 20:09:48.760958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.936 [2024-11-26 20:09:48.760989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.761109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.936 [2024-11-26 20:09:48.761132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.761249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.936 [2024-11-26 20:09:48.761270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.936 #15 NEW cov: 12549 ft: 14653 corp: 10/429b lim: 85 exec/s: 0 rss: 74Mb L: 52/52 MS: 1 ChangeBit- 00:07:35.936 [2024-11-26 20:09:48.831070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.936 [2024-11-26 20:09:48.831103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.831195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.936 [2024-11-26 20:09:48.831218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.936 [2024-11-26 20:09:48.831337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.936 [2024-11-26 20:09:48.831357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.196 #16 NEW cov: 12549 ft: 14699 corp: 11/481b lim: 85 exec/s: 0 rss: 74Mb L: 52/52 MS: 1 ChangeByte- 00:07:36.196 [2024-11-26 20:09:48.891269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.196 [2024-11-26 20:09:48.891302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.196 [2024-11-26 20:09:48.891416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.196 [2024-11-26 20:09:48.891441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.196 [2024-11-26 20:09:48.891561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.196 [2024-11-26 20:09:48.891583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.196 #17 NEW cov: 12549 ft: 14742 corp: 12/533b lim: 85 exec/s: 0 rss: 74Mb L: 52/52 MS: 1 ChangeBinInt- 00:07:36.196 [2024-11-26 20:09:48.930941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.196 [2024-11-26 20:09:48.930971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.196 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:36.196 #18 NEW cov: 12572 ft: 14800 corp: 13/565b lim: 85 exec/s: 0 rss: 74Mb L: 32/52 MS: 1 CopyPart- 00:07:36.196 [2024-11-26 20:09:48.991040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.196 [2024-11-26 20:09:48.991071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.196 #19 NEW cov: 12572 ft: 14867 corp: 14/591b lim: 85 exec/s: 19 rss: 74Mb L: 26/52 MS: 1 EraseBytes- 00:07:36.196 [2024-11-26 20:09:49.051692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.196 [2024-11-26 20:09:49.051724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.196 [2024-11-26 20:09:49.051825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.196 [2024-11-26 20:09:49.051850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.196 
[2024-11-26 20:09:49.051961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.196 [2024-11-26 20:09:49.051982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.196 #20 NEW cov: 12572 ft: 14887 corp: 15/643b lim: 85 exec/s: 20 rss: 74Mb L: 52/52 MS: 1 ChangeByte- 00:07:36.196 [2024-11-26 20:09:49.112138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.196 [2024-11-26 20:09:49.112166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.196 [2024-11-26 20:09:49.112256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.196 [2024-11-26 20:09:49.112275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.196 [2024-11-26 20:09:49.112392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.196 [2024-11-26 20:09:49.112414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.196 [2024-11-26 20:09:49.112535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:36.196 [2024-11-26 20:09:49.112558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.456 #21 NEW cov: 12572 ft: 15237 corp: 16/727b lim: 85 exec/s: 21 rss: 74Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:07:36.456 [2024-11-26 20:09:49.152015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.456 [2024-11-26 20:09:49.152043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.456 [2024-11-26 20:09:49.152140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.456 [2024-11-26 20:09:49.152162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.456 [2024-11-26 20:09:49.152284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.456 [2024-11-26 20:09:49.152308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.456 #22 NEW cov: 12572 ft: 15288 corp: 17/780b lim: 85 exec/s: 22 rss: 74Mb L: 53/84 MS: 1 InsertByte- 00:07:36.456 [2024-11-26 20:09:49.212117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.456 [2024-11-26 20:09:49.212152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.456 [2024-11-26 20:09:49.212268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.456 [2024-11-26 20:09:49.212287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:36.456 [2024-11-26 20:09:49.212410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.456 [2024-11-26 20:09:49.212431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.456 #23 NEW cov: 12572 ft: 15341 corp: 18/832b lim: 85 exec/s: 23 rss: 74Mb L: 52/84 MS: 1 ChangeByte- 00:07:36.456 [2024-11-26 20:09:49.252230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.456 [2024-11-26 20:09:49.252263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.456 [2024-11-26 20:09:49.252352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.456 [2024-11-26 20:09:49.252374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.456 [2024-11-26 20:09:49.252497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.456 [2024-11-26 20:09:49.252519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.456 #24 NEW cov: 12572 ft: 15368 corp: 19/888b lim: 85 exec/s: 24 rss: 74Mb L: 56/84 MS: 1 CMP- DE: "\016\000\000\000"- 00:07:36.456 [2024-11-26 20:09:49.302433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.456 [2024-11-26 20:09:49.302467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.456 [2024-11-26 20:09:49.302593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.456 [2024-11-26 20:09:49.302626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.456 [2024-11-26 20:09:49.302749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.456 [2024-11-26 20:09:49.302775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.457 #25 NEW cov: 12572 ft: 15417 corp: 20/944b lim: 85 exec/s: 25 rss: 74Mb L: 56/84 MS: 1 ShuffleBytes- 00:07:36.457 [2024-11-26 20:09:49.372081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.457 [2024-11-26 20:09:49.372110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.716 #26 NEW cov: 12572 ft: 15502 corp: 21/977b lim: 85 exec/s: 26 rss: 74Mb L: 33/84 MS: 1 InsertByte- 00:07:36.716 [2024-11-26 20:09:49.412483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.716 [2024-11-26 20:09:49.412515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.716 [2024-11-26 20:09:49.412644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.716 [2024-11-26 20:09:49.412667] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.716 #27 NEW cov: 12572 ft: 15807 corp: 22/1025b lim: 85 exec/s: 27 rss: 74Mb L: 48/84 MS: 1 CopyPart- 00:07:36.716 [2024-11-26 20:09:49.482953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.716 [2024-11-26 20:09:49.482986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.716 [2024-11-26 20:09:49.483086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.716 [2024-11-26 20:09:49.483107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.716 [2024-11-26 20:09:49.483233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.716 [2024-11-26 20:09:49.483256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.716 #28 NEW cov: 12572 ft: 15819 corp: 23/1077b lim: 85 exec/s: 28 rss: 74Mb L: 52/84 MS: 1 ChangeByte- 00:07:36.716 [2024-11-26 20:09:49.532839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.716 [2024-11-26 20:09:49.532873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.716 [2024-11-26 20:09:49.532979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.716 [2024-11-26 20:09:49.533014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.716 #29 NEW cov: 12572 ft: 15846 corp: 24/1114b lim: 85 exec/s: 29 rss: 74Mb L: 37/84 MS: 1 EraseBytes- 00:07:36.716 [2024-11-26 20:09:49.572671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.716 [2024-11-26 20:09:49.572698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.716 #30 NEW cov: 12572 ft: 15877 corp: 25/1146b lim: 85 exec/s: 30 rss: 74Mb L: 32/84 MS: 1 ChangeBit- 00:07:36.716 [2024-11-26 20:09:49.612759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.716 [2024-11-26 20:09:49.612791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.716 #31 NEW cov: 12572 ft: 15886 corp: 26/1179b lim: 85 exec/s: 31 rss: 74Mb L: 33/84 MS: 1 ShuffleBytes- 00:07:36.976 [2024-11-26 20:09:49.663361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.976 [2024-11-26 20:09:49.663392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.663475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.976 [2024-11-26 20:09:49.663498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.663625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.976 [2024-11-26 20:09:49.663648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.976 #32 NEW cov: 12572 ft: 15900 corp: 27/1232b lim: 85 exec/s: 32 rss: 74Mb L: 53/84 MS: 1 InsertByte- 00:07:36.976 [2024-11-26 20:09:49.703546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.976 [2024-11-26 20:09:49.703578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.703671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.976 [2024-11-26 20:09:49.703694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.703815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.976 [2024-11-26 20:09:49.703838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.976 #33 NEW cov: 12572 ft: 15907 corp: 28/1288b lim: 85 exec/s: 33 rss: 74Mb L: 56/84 MS: 1 PersAutoDict- DE: "\016\000\000\000"- 00:07:36.976 [2024-11-26 20:09:49.763015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.976 [2024-11-26 20:09:49.763047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.976 #34 NEW cov: 12572 ft: 15929 corp: 29/1320b lim: 85 exec/s: 34 rss: 74Mb L: 32/84 MS: 1 ShuffleBytes- 00:07:36.976 [2024-11-26 20:09:49.804087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.976 [2024-11-26 20:09:49.804119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.804189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.976 [2024-11-26 20:09:49.804207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.804332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.976 [2024-11-26 20:09:49.804355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.804475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:36.976 [2024-11-26 20:09:49.804497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.976 #35 NEW cov: 12572 ft: 15951 corp: 30/1392b lim: 85 exec/s: 35 rss: 74Mb L: 72/84 MS: 1 InsertRepeatedBytes- 00:07:36.976 [2024-11-26 20:09:49.844121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:0 nsid:0 00:07:36.976 [2024-11-26 20:09:49.844149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.844224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.976 [2024-11-26 20:09:49.844248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.844360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.976 [2024-11-26 20:09:49.844381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.976 [2024-11-26 20:09:49.844498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:36.976 [2024-11-26 20:09:49.844519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.976 #36 NEW cov: 12572 ft: 15961 corp: 31/1461b lim: 85 exec/s: 36 rss: 75Mb L: 69/84 MS: 1 EraseBytes- 00:07:36.977 [2024-11-26 20:09:49.903674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.977 [2024-11-26 20:09:49.903700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.236 #37 NEW cov: 12572 ft: 15962 corp: 32/1493b lim: 85 exec/s: 37 rss: 75Mb L: 32/84 MS: 1 CrossOver- 00:07:37.236 [2024-11-26 20:09:49.964231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.236 [2024-11-26 20:09:49.964262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.236 [2024-11-26 20:09:49.964380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.236 [2024-11-26 20:09:49.964402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.236 [2024-11-26 20:09:49.964516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.236 [2024-11-26 20:09:49.964537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.236 #38 NEW cov: 12572 ft: 15993 corp: 33/1546b lim: 85 exec/s: 38 rss: 75Mb L: 53/84 MS: 1 InsertByte- 00:07:37.236 [2024-11-26 20:09:50.014429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.236 [2024-11-26 20:09:50.014459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.236 [2024-11-26 20:09:50.014530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.236 [2024-11-26 20:09:50.014551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.236 [2024-11-26 20:09:50.014668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:2 nsid:0 00:07:37.236 [2024-11-26 20:09:50.014692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.236 #39 NEW cov: 12572 ft: 16083 corp: 34/1604b lim: 85 exec/s: 19 rss: 75Mb L: 58/84 MS: 1 CMP- DE: "\000\003"- 00:07:37.236 #39 DONE cov: 12572 ft: 16083 corp: 34/1604b lim: 85 exec/s: 19 rss: 75Mb 00:07:37.236 ###### Recommended dictionary. ###### 00:07:37.236 "\016\000\000\000" # Uses: 1 00:07:37.236 "\000\003" # Uses: 0 00:07:37.236 ###### End of recommended dictionary. ###### 00:07:37.236 Done 39 runs in 2 second(s) 00:07:37.236 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.236 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.236 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.236 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:37.236 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:37.236 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:37.237 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:37.496 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:37.496 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.496 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.496 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.496 20:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:37.496 [2024-11-26 20:09:50.201081] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
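The completion traces in this log, e.g. "INVALID NAMESPACE OR FORMAT (00/0b) ... p:0 m:0 dnr:1", print the NVMe status code type and status code followed by the phase, more, and do-not-retry bits. The following is a minimal standalone sketch of that decoding, assuming the standard NVMe completion status field layout (bit 0 = phase tag, bits 8:1 = status code, bits 11:9 = status code type, bit 14 = more, bit 15 = do not retry); it is illustrative only, not SPDK's spdk_nvme_print_completion implementation, and the raw status value used below is a hypothetical example.

/* Illustrative sketch (not SPDK source): decode the 16-bit status word
 * into the fields this log prints, e.g. "(00/0b) p:0 m:0 dnr:1" where
 * SCT=0x00 (generic command status) and SC=0x0b corresponds to
 * "INVALID NAMESPACE OR FORMAT". */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t status = 0x8016; /* hypothetical raw value: dnr=1, sct=0x0, sc=0x0b, p=0 */
    unsigned p   = status & 0x1;          /* phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* status code */
    unsigned sct = (status >> 9) & 0x7;   /* status code type */
    unsigned m   = (status >> 14) & 0x1;  /* more */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}

Compiled and run as-is, this prints "(00/0b) p:0 m:0 dnr:1", matching the suffix of the completion lines in this log.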
00:07:37.496 [2024-11-26 20:09:50.201156] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617276 ] 00:07:37.496 [2024-11-26 20:09:50.400362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.756 [2024-11-26 20:09:50.435509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.756 [2024-11-26 20:09:50.494974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.756 [2024-11-26 20:09:50.511321] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:37.756 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.756 INFO: Seed: 1831604922 00:07:37.756 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:37.756 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:37.756 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:37.756 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.756 #2 INITED exec/s: 0 rss: 65Mb 00:07:37.756 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:37.756 This may also happen if the target rejected all inputs we tried so far 00:07:37.756 [2024-11-26 20:09:50.566893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.756 [2024-11-26 20:09:50.566923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.756 [2024-11-26 20:09:50.566970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:37.756 [2024-11-26 20:09:50.566986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.756 [2024-11-26 20:09:50.567041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:37.756 [2024-11-26 20:09:50.567056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.756 [2024-11-26 20:09:50.567114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:37.756 [2024-11-26 20:09:50.567128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.015 NEW_FUNC[1/717]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:38.015 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.015 #18 NEW cov: 12278 ft: 12277 corp: 2/21b lim: 25 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:38.015 [2024-11-26 20:09:50.907927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.015 [2024-11-26 20:09:50.907986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.015 [2024-11-26 
20:09:50.908072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.015 [2024-11-26 20:09:50.908101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.015 [2024-11-26 20:09:50.908181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.015 [2024-11-26 20:09:50.908209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.015 [2024-11-26 20:09:50.908289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.015 [2024-11-26 20:09:50.908317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.276 #19 NEW cov: 12391 ft: 12977 corp: 3/42b lim: 25 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 CrossOver- 00:07:38.276 [2024-11-26 20:09:50.977691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.276 [2024-11-26 20:09:50.977719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:50.977780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.276 [2024-11-26 20:09:50.977795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:50.977849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.276 [2024-11-26 20:09:50.977863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.276 #20 NEW cov: 12397 ft: 13613 corp: 4/61b lim: 25 exec/s: 0 rss: 74Mb L: 19/21 MS: 1 EraseBytes- 00:07:38.276 [2024-11-26 20:09:51.037955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.276 [2024-11-26 20:09:51.037983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.038032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.276 [2024-11-26 20:09:51.038048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.038100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.276 [2024-11-26 20:09:51.038116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.038171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.276 [2024-11-26 20:09:51.038187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.276 #21 NEW cov: 12482 ft: 13909 corp: 5/81b lim: 25 exec/s: 0 rss: 74Mb L: 20/21 MS: 1 ChangeByte- 00:07:38.276 [2024-11-26 20:09:51.078018] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.276 [2024-11-26 20:09:51.078045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.078094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.276 [2024-11-26 20:09:51.078110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.078164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.276 [2024-11-26 20:09:51.078179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.078236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.276 [2024-11-26 20:09:51.078251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.276 #22 NEW cov: 12482 ft: 14085 corp: 6/101b lim: 25 exec/s: 0 rss: 74Mb L: 20/21 MS: 1 ShuffleBytes- 00:07:38.276 [2024-11-26 20:09:51.138131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.276 [2024-11-26 20:09:51.138158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.138203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.276 [2024-11-26 20:09:51.138219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.138272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.276 [2024-11-26 20:09:51.138289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.276 #23 NEW cov: 12482 ft: 14157 corp: 7/120b lim: 25 exec/s: 0 rss: 74Mb L: 19/21 MS: 1 ChangeBinInt- 00:07:38.276 [2024-11-26 20:09:51.198436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.276 [2024-11-26 20:09:51.198464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.198512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.276 [2024-11-26 20:09:51.198529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.198579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.276 [2024-11-26 20:09:51.198615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.276 [2024-11-26 20:09:51.198673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.276 [2024-11-26 
20:09:51.198689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.536 #24 NEW cov: 12482 ft: 14197 corp: 8/141b lim: 25 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 CrossOver- 00:07:38.536 [2024-11-26 20:09:51.238366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.536 [2024-11-26 20:09:51.238393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.238440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.536 [2024-11-26 20:09:51.238455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.238508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.536 [2024-11-26 20:09:51.238523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.536 #25 NEW cov: 12482 ft: 14295 corp: 9/160b lim: 25 exec/s: 0 rss: 74Mb L: 19/21 MS: 1 ChangeByte- 00:07:38.536 [2024-11-26 20:09:51.278596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.536 [2024-11-26 20:09:51.278628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.278698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.536 [2024-11-26 20:09:51.278719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.278773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.536 [2024-11-26 20:09:51.278789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.278841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.536 [2024-11-26 20:09:51.278857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.536 #26 NEW cov: 12482 ft: 14348 corp: 10/183b lim: 25 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 CopyPart- 00:07:38.536 [2024-11-26 20:09:51.318608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.536 [2024-11-26 20:09:51.318637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.318702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.536 [2024-11-26 20:09:51.318717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.318771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.536 [2024-11-26 20:09:51.318786] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.536 #27 NEW cov: 12482 ft: 14454 corp: 11/198b lim: 25 exec/s: 0 rss: 74Mb L: 15/23 MS: 1 EraseBytes- 00:07:38.536 [2024-11-26 20:09:51.358840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.536 [2024-11-26 20:09:51.358868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.358932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.536 [2024-11-26 20:09:51.358948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.359001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.536 [2024-11-26 20:09:51.359016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.359070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.536 [2024-11-26 20:09:51.359086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.536 #28 NEW cov: 12482 ft: 14474 corp: 12/220b lim: 25 exec/s: 0 rss: 74Mb L: 22/23 MS: 1 InsertByte- 00:07:38.536 [2024-11-26 20:09:51.419026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.536 [2024-11-26 20:09:51.419054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.419119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.536 [2024-11-26 20:09:51.419136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.419192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.536 [2024-11-26 20:09:51.419206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.419265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.536 [2024-11-26 20:09:51.419281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.536 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:38.536 #29 NEW cov: 12505 ft: 14488 corp: 13/240b lim: 25 exec/s: 0 rss: 74Mb L: 20/23 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:38.536 [2024-11-26 20:09:51.459239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.536 [2024-11-26 20:09:51.459266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.459320] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.536 [2024-11-26 20:09:51.459335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.459403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.536 [2024-11-26 20:09:51.459418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.536 [2024-11-26 20:09:51.459472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.536 [2024-11-26 20:09:51.459487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.796 #30 NEW cov: 12514 ft: 14555 corp: 14/260b lim: 25 exec/s: 0 rss: 74Mb L: 20/23 MS: 1 CrossOver- 00:07:38.796 [2024-11-26 20:09:51.499143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.796 [2024-11-26 20:09:51.499171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.499209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.796 [2024-11-26 20:09:51.499223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.499275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.796 [2024-11-26 20:09:51.499289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.796 #31 NEW cov: 12514 ft: 14624 corp: 15/275b lim: 25 exec/s: 0 rss: 74Mb L: 15/23 MS: 1 CrossOver- 00:07:38.796 [2024-11-26 20:09:51.559421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.796 [2024-11-26 20:09:51.559449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.559518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.796 [2024-11-26 20:09:51.559533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.559587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.796 [2024-11-26 20:09:51.559605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.559661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.796 [2024-11-26 20:09:51.559688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.796 #32 NEW cov: 12514 ft: 14651 corp: 16/296b lim: 25 exec/s: 32 rss: 74Mb L: 21/23 MS: 1 CrossOver- 00:07:38.796 [2024-11-26 20:09:51.599420] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.796 [2024-11-26 20:09:51.599452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.599505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.796 [2024-11-26 20:09:51.599521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.599579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.796 [2024-11-26 20:09:51.599595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.796 #33 NEW cov: 12514 ft: 14688 corp: 17/311b lim: 25 exec/s: 33 rss: 74Mb L: 15/23 MS: 1 ShuffleBytes- 00:07:38.796 [2024-11-26 20:09:51.639677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.796 [2024-11-26 20:09:51.639705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.639785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.796 [2024-11-26 20:09:51.639801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.639853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.796 [2024-11-26 20:09:51.639868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.639921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:38.796 [2024-11-26 20:09:51.639937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.796 #34 NEW cov: 12514 ft: 14698 corp: 18/331b lim: 25 exec/s: 34 rss: 74Mb L: 20/23 MS: 1 ChangeBinInt- 00:07:38.796 [2024-11-26 20:09:51.699732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:38.796 [2024-11-26 20:09:51.699758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.699822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:38.796 [2024-11-26 20:09:51.699838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.796 [2024-11-26 20:09:51.699892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:38.796 [2024-11-26 20:09:51.699908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.796 #35 NEW cov: 12514 ft: 14722 corp: 19/346b lim: 25 exec/s: 35 rss: 74Mb L: 15/23 MS: 1 ShuffleBytes- 00:07:39.055 [2024-11-26 20:09:51.739928] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.055 [2024-11-26 20:09:51.739954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.055 [2024-11-26 20:09:51.740025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.055 [2024-11-26 20:09:51.740039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.055 [2024-11-26 20:09:51.740092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.055 [2024-11-26 20:09:51.740106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.740163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.056 [2024-11-26 20:09:51.740179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.056 #36 NEW cov: 12514 ft: 14741 corp: 20/370b lim: 25 exec/s: 36 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:39.056 [2024-11-26 20:09:51.780102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.056 [2024-11-26 20:09:51.780130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.780205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.056 [2024-11-26 20:09:51.780221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.780273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.056 [2024-11-26 20:09:51.780289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.780343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.056 [2024-11-26 20:09:51.780358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.056 #37 NEW cov: 12514 ft: 14778 corp: 21/394b lim: 25 exec/s: 37 rss: 74Mb L: 24/24 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:39.056 [2024-11-26 20:09:51.819988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.056 [2024-11-26 20:09:51.820014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.820049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.056 [2024-11-26 20:09:51.820064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.056 #38 NEW cov: 12514 ft: 15028 corp: 22/407b lim: 25 exec/s: 38 rss: 74Mb L: 13/24 MS: 1 EraseBytes- 00:07:39.056 [2024-11-26 
20:09:51.880317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.056 [2024-11-26 20:09:51.880344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.880398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.056 [2024-11-26 20:09:51.880412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.880467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.056 [2024-11-26 20:09:51.880482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.880534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.056 [2024-11-26 20:09:51.880549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.056 #39 NEW cov: 12514 ft: 15034 corp: 23/429b lim: 25 exec/s: 39 rss: 74Mb L: 22/24 MS: 1 ShuffleBytes- 00:07:39.056 [2024-11-26 20:09:51.940630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.056 [2024-11-26 20:09:51.940657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.940727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.056 [2024-11-26 20:09:51.940756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.940812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.056 [2024-11-26 20:09:51.940826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.940878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.056 [2024-11-26 20:09:51.940893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.940946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:39.056 [2024-11-26 20:09:51.940961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:39.056 #40 NEW cov: 12514 ft: 15065 corp: 24/454b lim: 25 exec/s: 40 rss: 74Mb L: 25/25 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:39.056 [2024-11-26 20:09:51.980615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.056 [2024-11-26 20:09:51.980643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.980699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:1 nsid:0 00:07:39.056 [2024-11-26 20:09:51.980715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.980770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.056 [2024-11-26 20:09:51.980784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.056 [2024-11-26 20:09:51.980840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.056 [2024-11-26 20:09:51.980856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.316 #41 NEW cov: 12514 ft: 15096 corp: 25/476b lim: 25 exec/s: 41 rss: 75Mb L: 22/25 MS: 1 CrossOver- 00:07:39.316 [2024-11-26 20:09:52.020730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.316 [2024-11-26 20:09:52.020757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.020822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.316 [2024-11-26 20:09:52.020838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.020892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.316 [2024-11-26 20:09:52.020908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.020962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.316 [2024-11-26 20:09:52.020977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.316 #42 NEW cov: 12514 ft: 15108 corp: 26/496b lim: 25 exec/s: 42 rss: 75Mb L: 20/25 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:39.316 [2024-11-26 20:09:52.080810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.316 [2024-11-26 20:09:52.080837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.080901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.316 [2024-11-26 20:09:52.080921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.080978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.316 [2024-11-26 20:09:52.080993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.316 #43 NEW cov: 12514 ft: 15118 corp: 27/515b lim: 25 exec/s: 43 rss: 75Mb L: 19/25 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:39.316 [2024-11-26 20:09:52.121005] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.316 [2024-11-26 20:09:52.121032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.121102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.316 [2024-11-26 20:09:52.121119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.121170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.316 [2024-11-26 20:09:52.121185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.121238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.316 [2024-11-26 20:09:52.121254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.316 #44 NEW cov: 12514 ft: 15120 corp: 28/535b lim: 25 exec/s: 44 rss: 75Mb L: 20/25 MS: 1 ChangeByte- 00:07:39.316 [2024-11-26 20:09:52.180919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.316 [2024-11-26 20:09:52.180946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.180997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.316 [2024-11-26 20:09:52.181012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.316 #45 NEW cov: 12514 ft: 15170 corp: 29/549b lim: 25 exec/s: 45 rss: 75Mb L: 14/25 MS: 1 CrossOver- 00:07:39.316 [2024-11-26 20:09:52.221174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.316 [2024-11-26 20:09:52.221202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.221238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.316 [2024-11-26 20:09:52.221252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.316 [2024-11-26 20:09:52.221309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.316 [2024-11-26 20:09:52.221325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.316 #46 NEW cov: 12514 ft: 15193 corp: 30/564b lim: 25 exec/s: 46 rss: 75Mb L: 15/25 MS: 1 ChangeBinInt- 00:07:39.575 [2024-11-26 20:09:52.261144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.575 [2024-11-26 20:09:52.261171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.575 [2024-11-26 20:09:52.261207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.575 [2024-11-26 20:09:52.261222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.575 #47 NEW cov: 12514 ft: 15199 corp: 31/577b lim: 25 exec/s: 47 rss: 75Mb L: 13/25 MS: 1 EraseBytes- 00:07:39.575 [2024-11-26 20:09:52.321354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.575 [2024-11-26 20:09:52.321381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.575 [2024-11-26 20:09:52.321419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.575 [2024-11-26 20:09:52.321435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.575 #48 NEW cov: 12514 ft: 15256 corp: 32/590b lim: 25 exec/s: 48 rss: 75Mb L: 13/25 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:39.575 [2024-11-26 20:09:52.381610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.575 [2024-11-26 20:09:52.381638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.575 [2024-11-26 20:09:52.381707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.575 [2024-11-26 20:09:52.381723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.575 [2024-11-26 20:09:52.381787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.575 [2024-11-26 20:09:52.381802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.576 #49 NEW cov: 12514 ft: 15284 corp: 33/605b lim: 25 exec/s: 49 rss: 75Mb L: 15/25 MS: 1 ChangeBinInt- 00:07:39.576 [2024-11-26 20:09:52.441671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.576 [2024-11-26 20:09:52.441699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.576 [2024-11-26 20:09:52.441737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.576 [2024-11-26 20:09:52.441753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.576 #50 NEW cov: 12514 ft: 15323 corp: 34/618b lim: 25 exec/s: 50 rss: 75Mb L: 13/25 MS: 1 EraseBytes- 00:07:39.576 [2024-11-26 20:09:52.502039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.576 [2024-11-26 20:09:52.502068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.576 [2024-11-26 20:09:52.502116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.576 [2024-11-26 20:09:52.502133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.576 [2024-11-26 20:09:52.502189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.576 [2024-11-26 20:09:52.502203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.576 [2024-11-26 20:09:52.502259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.576 [2024-11-26 20:09:52.502275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.835 #51 NEW cov: 12514 ft: 15345 corp: 35/640b lim: 25 exec/s: 51 rss: 75Mb L: 22/25 MS: 1 ChangeBinInt- 00:07:39.835 [2024-11-26 20:09:52.562167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.835 [2024-11-26 20:09:52.562195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.835 [2024-11-26 20:09:52.562260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.835 [2024-11-26 20:09:52.562276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.835 [2024-11-26 20:09:52.562334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.835 [2024-11-26 20:09:52.562350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.835 [2024-11-26 20:09:52.562405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.835 [2024-11-26 20:09:52.562420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.835 #52 NEW cov: 12514 ft: 15370 corp: 36/662b lim: 25 exec/s: 26 rss: 75Mb L: 22/25 MS: 1 ChangeByte- 00:07:39.835 #52 DONE cov: 12514 ft: 15370 corp: 36/662b lim: 25 exec/s: 26 rss: 75Mb 00:07:39.835 ###### Recommended dictionary. ###### 00:07:39.835 "\000\000\000\000" # Uses: 4 00:07:39.835 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:39.835 ###### End of recommended dictionary. 
###### 00:07:39.835 Done 52 runs in 2 second(s) 00:07:39.835 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.835 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.835 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.835 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.836 20:09:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:39.836 [2024-11-26 20:09:52.747915] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
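The run launched above targets port 4424 with fuzzer type 24, and the command traces that follow show it exercising the NVM COMPARE opcode (e.g. "COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1"). The sketch below illustrates, under the standard NVMe command layout (CDW10/11 = 64-bit starting LBA, CDW12 bits 15:0 = 0's-based number of logical blocks), how those dwords yield the lba/len values such a trace reports; it is an illustrative example, not the fuzzer's or SPDK's actual code, and the dword values are chosen to reproduce the lba:167772160 len:1 case seen in the traces below.

/* Illustrative sketch (not SPDK source): map CDW10-12 of an NVM COMPARE
 * (opcode 0x05) command to the "lba:<slba> len:<nlb+1>" values printed by
 * the command trace. */
#include <stdio.h>
#include <stdint.h>

struct cmd_dwords {
    uint32_t cdw10;
    uint32_t cdw11;
    uint32_t cdw12;
};

int main(void)
{
    struct cmd_dwords c = {
        .cdw10 = 0x0A000000u, /* SLBA bits 31:0  -> 167772160 */
        .cdw11 = 0x00000000u, /* SLBA bits 63:32 */
        .cdw12 = 0x00000000u, /* bits 15:0 = NLB, 0's based, so length 1 */
    };

    uint64_t slba = ((uint64_t)c.cdw11 << 32) | c.cdw10;
    uint32_t len  = (c.cdw12 & 0xffff) + 1;

    printf("COMPARE lba:%llu len:%u\n", (unsigned long long)slba, (unsigned)len);
    return 0;
}

Run as-is it prints "COMPARE lba:167772160 len:1"; the length shown in the trace is NLB+1.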
00:07:39.836 [2024-11-26 20:09:52.747986] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617717 ] 00:07:40.095 [2024-11-26 20:09:52.941579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.095 [2024-11-26 20:09:52.978905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.354 [2024-11-26 20:09:53.037895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.354 [2024-11-26 20:09:53.054243] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:40.354 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.354 INFO: Seed: 79650519 00:07:40.354 INFO: Loaded 1 modules (389518 inline 8-bit counters): 389518 [0x2c6a00c, 0x2cc919a), 00:07:40.354 INFO: Loaded 1 PC tables (389518 PCs): 389518 [0x2cc91a0,0x32baa80), 00:07:40.354 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:40.354 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.354 #2 INITED exec/s: 0 rss: 65Mb 00:07:40.354 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:40.354 This may also happen if the target rejected all inputs we tried so far 00:07:40.354 [2024-11-26 20:09:53.098992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.354 [2024-11-26 20:09:53.099028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.614 NEW_FUNC[1/718]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:40.614 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:40.614 #3 NEW cov: 12350 ft: 12330 corp: 2/34b lim: 100 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:40.614 [2024-11-26 20:09:53.449812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.614 [2024-11-26 20:09:53.449849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.614 #4 NEW cov: 12463 ft: 12939 corp: 3/67b lim: 100 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:07:40.614 [2024-11-26 20:09:53.539996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.614 [2024-11-26 20:09:53.540028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.873 #15 NEW cov: 12469 ft: 13216 corp: 4/100b lim: 100 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:07:40.873 [2024-11-26 20:09:53.590077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.873 [2024-11-26 20:09:53.590109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:07:40.873 #16 NEW cov: 12554 ft: 13464 corp: 5/133b lim: 100 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:07:40.873 [2024-11-26 20:09:53.680350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.873 [2024-11-26 20:09:53.680379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.873 [2024-11-26 20:09:53.680428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.873 [2024-11-26 20:09:53.680446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.873 #22 NEW cov: 12554 ft: 14475 corp: 6/176b lim: 100 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 CrossOver- 00:07:40.873 [2024-11-26 20:09:53.770572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.873 [2024-11-26 20:09:53.770608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.132 #23 NEW cov: 12554 ft: 14519 corp: 7/212b lim: 100 exec/s: 0 rss: 73Mb L: 36/43 MS: 1 CopyPart- 00:07:41.132 [2024-11-26 20:09:53.830679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:53.830710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.132 #28 NEW cov: 12554 ft: 14544 corp: 8/250b lim: 100 exec/s: 0 rss: 73Mb L: 38/43 MS: 5 ShuffleBytes-ChangeByte-InsertByte-InsertByte-InsertRepeatedBytes- 00:07:41.132 [2024-11-26 20:09:53.890972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15408456814510331349 len:54742 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:53.891003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.132 [2024-11-26 20:09:53.891039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15408456814510331349 len:54742 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:53.891058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.132 #31 NEW cov: 12554 ft: 14652 corp: 9/308b lim: 100 exec/s: 0 rss: 73Mb L: 58/58 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:41.132 [2024-11-26 20:09:53.951050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:53.951080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.132 [2024-11-26 20:09:53.951129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:53.951148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:41.132 NEW_FUNC[1/1]: 0x1c46778 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:41.132 #32 NEW cov: 12577 ft: 14776 corp: 10/351b lim: 100 exec/s: 0 rss: 73Mb L: 43/58 MS: 1 CrossOver- 00:07:41.132 [2024-11-26 20:09:54.051505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:54.051536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.132 [2024-11-26 20:09:54.051570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9982943851654580874 len:35467 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:54.051587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.132 [2024-11-26 20:09:54.051625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9982943851654580874 len:35467 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:54.051642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.132 [2024-11-26 20:09:54.051671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9982943851654580874 len:35467 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.132 [2024-11-26 20:09:54.051688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.391 #33 NEW cov: 12577 ft: 15265 corp: 11/445b lim: 100 exec/s: 33 rss: 74Mb L: 94/94 MS: 1 InsertRepeatedBytes- 00:07:41.391 [2024-11-26 20:09:54.151527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.391 [2024-11-26 20:09:54.151557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.391 #34 NEW cov: 12577 ft: 15279 corp: 12/473b lim: 100 exec/s: 34 rss: 74Mb L: 28/94 MS: 1 EraseBytes- 00:07:41.391 [2024-11-26 20:09:54.241775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.391 [2024-11-26 20:09:54.241806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.391 #35 NEW cov: 12577 ft: 15319 corp: 13/494b lim: 100 exec/s: 35 rss: 74Mb L: 21/94 MS: 1 EraseBytes- 00:07:41.391 [2024-11-26 20:09:54.301921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.391 [2024-11-26 20:09:54.301952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.651 #36 NEW cov: 12577 ft: 15349 corp: 14/515b lim: 100 exec/s: 36 rss: 74Mb L: 21/94 MS: 1 ChangeBinInt- 00:07:41.651 [2024-11-26 20:09:54.392307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069582356735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.651 [2024-11-26 20:09:54.392337] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.651 [2024-11-26 20:09:54.392386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.651 [2024-11-26 20:09:54.392404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.651 [2024-11-26 20:09:54.392435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.651 [2024-11-26 20:09:54.392451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.651 #37 NEW cov: 12577 ft: 15733 corp: 15/587b lim: 100 exec/s: 37 rss: 74Mb L: 72/94 MS: 1 InsertRepeatedBytes- 00:07:41.651 [2024-11-26 20:09:54.452300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:44 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.651 [2024-11-26 20:09:54.452331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.651 #38 NEW cov: 12577 ft: 15824 corp: 16/608b lim: 100 exec/s: 38 rss: 74Mb L: 21/94 MS: 1 EraseBytes- 00:07:41.651 [2024-11-26 20:09:54.512533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.651 [2024-11-26 20:09:54.512564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.651 [2024-11-26 20:09:54.512607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.651 [2024-11-26 20:09:54.512627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.651 #39 NEW cov: 12577 ft: 15835 corp: 17/656b lim: 100 exec/s: 39 rss: 74Mb L: 48/94 MS: 1 InsertRepeatedBytes- 00:07:41.651 [2024-11-26 20:09:54.572604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:25937575936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.651 [2024-11-26 20:09:54.572634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.910 #40 NEW cov: 12577 ft: 15868 corp: 18/691b lim: 100 exec/s: 40 rss: 74Mb L: 35/94 MS: 1 CMP- DE: "\006\000"- 00:07:41.910 [2024-11-26 20:09:54.622752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.910 [2024-11-26 20:09:54.622783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.910 #41 NEW cov: 12577 ft: 15876 corp: 19/719b lim: 100 exec/s: 41 rss: 74Mb L: 28/94 MS: 1 ChangeByte- 00:07:41.910 [2024-11-26 20:09:54.713224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.910 [2024-11-26 20:09:54.713254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.910 [2024-11-26 20:09:54.713286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.713306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.911 [2024-11-26 20:09:54.713337] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.713354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.911 [2024-11-26 20:09:54.713383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.713400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.911 #42 NEW cov: 12577 ft: 15904 corp: 20/818b lim: 100 exec/s: 42 rss: 74Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:07:41.911 [2024-11-26 20:09:54.773371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.773401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.911 [2024-11-26 20:09:54.773448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.773466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.911 [2024-11-26 20:09:54.773498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.773514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.911 [2024-11-26 20:09:54.773543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.773559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.911 [2024-11-26 20:09:54.773588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.911 [2024-11-26 20:09:54.773610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.170 #43 NEW cov: 12577 ft: 15962 corp: 21/918b lim: 100 exec/s: 43 rss: 74Mb L: 100/100 MS: 1 CopyPart- 00:07:42.170 [2024-11-26 20:09:54.873650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:54.873682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.170 [2024-11-26 20:09:54.873715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446514275779346431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:54.873733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.170 [2024-11-26 20:09:54.873768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:54.873785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.170 [2024-11-26 20:09:54.873814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:54.873831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.170 #44 NEW cov: 12577 ft: 15979 corp: 22/1017b lim: 100 exec/s: 44 rss: 74Mb L: 99/100 MS: 1 ChangeByte- 00:07:42.170 [2024-11-26 20:09:54.933564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:54.933595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.170 #45 NEW cov: 12577 ft: 15980 corp: 23/1040b lim: 100 exec/s: 45 rss: 74Mb L: 23/100 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:42.170 [2024-11-26 20:09:54.994079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:54.994110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.170 [2024-11-26 20:09:54.994144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:54.994162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.170 #46 NEW cov: 12577 ft: 16007 corp: 24/1083b lim: 100 exec/s: 46 rss: 74Mb L: 43/100 MS: 1 ShuffleBytes- 00:07:42.170 [2024-11-26 20:09:55.054012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069582356735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:55.054042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.170 [2024-11-26 20:09:55.054091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446463698244468735 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.170 [2024-11-26 20:09:55.054109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.170 [2024-11-26 20:09:55.054140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7012352 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:42.170 [2024-11-26 20:09:55.054156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.430 #47 NEW cov: 12577 ft: 16029 corp: 25/1158b lim: 100 exec/s: 23 rss: 74Mb L: 75/100 MS: 1 CopyPart- 00:07:42.430 #47 DONE cov: 12577 ft: 16029 corp: 25/1158b lim: 100 exec/s: 23 rss: 74Mb 00:07:42.430 ###### Recommended dictionary. ###### 00:07:42.430 "\006\000" # Uses: 1 00:07:42.430 ###### End of recommended dictionary. ###### 00:07:42.430 Done 47 runs in 2 second(s) 00:07:42.430 20:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:42.430 20:09:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.430 20:09:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.430 20:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:42.430 00:07:42.430 real 1m3.234s 00:07:42.430 user 1m39.615s 00:07:42.430 sys 0m7.313s 00:07:42.430 20:09:55 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.430 20:09:55 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:42.430 ************************************ 00:07:42.430 END TEST nvmf_llvm_fuzz 00:07:42.430 ************************************ 00:07:42.430 20:09:55 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:42.430 20:09:55 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:42.430 20:09:55 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:42.430 20:09:55 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.430 20:09:55 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.430 20:09:55 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:42.430 ************************************ 00:07:42.430 START TEST vfio_llvm_fuzz 00:07:42.430 ************************************ 00:07:42.430 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:42.691 * Looking for test storage... 
00:07:42.691 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.691 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.691 --rc genhtml_branch_coverage=1 00:07:42.691 --rc genhtml_function_coverage=1 00:07:42.692 --rc genhtml_legend=1 00:07:42.692 --rc geninfo_all_blocks=1 00:07:42.692 --rc geninfo_unexecuted_blocks=1 00:07:42.692 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.692 ' 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.692 --rc genhtml_branch_coverage=1 00:07:42.692 --rc genhtml_function_coverage=1 00:07:42.692 --rc genhtml_legend=1 00:07:42.692 --rc geninfo_all_blocks=1 00:07:42.692 --rc geninfo_unexecuted_blocks=1 00:07:42.692 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.692 ' 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.692 --rc genhtml_branch_coverage=1 00:07:42.692 --rc genhtml_function_coverage=1 00:07:42.692 --rc genhtml_legend=1 00:07:42.692 --rc geninfo_all_blocks=1 00:07:42.692 --rc geninfo_unexecuted_blocks=1 00:07:42.692 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.692 ' 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.692 --rc genhtml_branch_coverage=1 00:07:42.692 --rc genhtml_function_coverage=1 00:07:42.692 --rc genhtml_legend=1 00:07:42.692 --rc geninfo_all_blocks=1 00:07:42.692 --rc geninfo_unexecuted_blocks=1 00:07:42.692 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.692 ' 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:42.692 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:42.693 #define SPDK_CONFIG_H 00:07:42.693 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:42.693 #define SPDK_CONFIG_APPS 1 00:07:42.693 #define SPDK_CONFIG_ARCH native 00:07:42.693 #undef SPDK_CONFIG_ASAN 00:07:42.693 #undef SPDK_CONFIG_AVAHI 00:07:42.693 #undef SPDK_CONFIG_CET 00:07:42.693 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:42.693 #define SPDK_CONFIG_COVERAGE 1 00:07:42.693 #define SPDK_CONFIG_CROSS_PREFIX 00:07:42.693 #undef SPDK_CONFIG_CRYPTO 00:07:42.693 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:42.693 #undef SPDK_CONFIG_CUSTOMOCF 00:07:42.693 #undef SPDK_CONFIG_DAOS 00:07:42.693 #define SPDK_CONFIG_DAOS_DIR 00:07:42.693 #define SPDK_CONFIG_DEBUG 1 00:07:42.693 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:42.693 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:42.693 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:42.693 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:42.693 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:42.693 #undef SPDK_CONFIG_DPDK_UADK 00:07:42.693 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:42.693 #define SPDK_CONFIG_EXAMPLES 1 00:07:42.693 #undef SPDK_CONFIG_FC 00:07:42.693 #define SPDK_CONFIG_FC_PATH 00:07:42.693 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:42.693 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:42.693 #define SPDK_CONFIG_FSDEV 1 00:07:42.693 #undef SPDK_CONFIG_FUSE 00:07:42.693 #define SPDK_CONFIG_FUZZER 1 00:07:42.693 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:42.693 #undef 
SPDK_CONFIG_GOLANG 00:07:42.693 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:42.693 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:42.693 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:42.693 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:42.693 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:42.693 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:42.693 #undef SPDK_CONFIG_HAVE_LZ4 00:07:42.693 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:42.693 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:42.693 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:42.693 #define SPDK_CONFIG_IDXD 1 00:07:42.693 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:42.693 #undef SPDK_CONFIG_IPSEC_MB 00:07:42.693 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:42.693 #define SPDK_CONFIG_ISAL 1 00:07:42.693 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:42.693 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:42.693 #define SPDK_CONFIG_LIBDIR 00:07:42.693 #undef SPDK_CONFIG_LTO 00:07:42.693 #define SPDK_CONFIG_MAX_LCORES 128 00:07:42.693 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:42.693 #define SPDK_CONFIG_NVME_CUSE 1 00:07:42.693 #undef SPDK_CONFIG_OCF 00:07:42.693 #define SPDK_CONFIG_OCF_PATH 00:07:42.693 #define SPDK_CONFIG_OPENSSL_PATH 00:07:42.693 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:42.693 #define SPDK_CONFIG_PGO_DIR 00:07:42.693 #undef SPDK_CONFIG_PGO_USE 00:07:42.693 #define SPDK_CONFIG_PREFIX /usr/local 00:07:42.693 #undef SPDK_CONFIG_RAID5F 00:07:42.693 #undef SPDK_CONFIG_RBD 00:07:42.693 #define SPDK_CONFIG_RDMA 1 00:07:42.693 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:42.693 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:42.693 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:42.693 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:42.693 #undef SPDK_CONFIG_SHARED 00:07:42.693 #undef SPDK_CONFIG_SMA 00:07:42.693 #define SPDK_CONFIG_TESTS 1 00:07:42.693 #undef SPDK_CONFIG_TSAN 00:07:42.693 #define SPDK_CONFIG_UBLK 1 00:07:42.693 #define SPDK_CONFIG_UBSAN 1 00:07:42.693 #undef SPDK_CONFIG_UNIT_TESTS 00:07:42.693 #undef SPDK_CONFIG_URING 00:07:42.693 #define SPDK_CONFIG_URING_PATH 00:07:42.693 #undef SPDK_CONFIG_URING_ZNS 00:07:42.693 #undef SPDK_CONFIG_USDT 00:07:42.693 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:42.693 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:42.693 #define SPDK_CONFIG_VFIO_USER 1 00:07:42.693 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:42.693 #define SPDK_CONFIG_VHOST 1 00:07:42.693 #define SPDK_CONFIG_VIRTIO 1 00:07:42.693 #undef SPDK_CONFIG_VTUNE 00:07:42.693 #define SPDK_CONFIG_VTUNE_DIR 00:07:42.693 #define SPDK_CONFIG_WERROR 1 00:07:42.693 #define SPDK_CONFIG_WPDK_DIR 00:07:42.693 #undef SPDK_CONFIG_XNVME 00:07:42.693 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:42.693 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:42.694 20:09:55 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:42.694 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:42.695 20:09:55 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1618127 ]] 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1618127 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3AFUHd 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.3AFUHd/tests/vfio /tmp/spdk.3AFUHd 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:42.695 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=53017202688 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730607104 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8713404416 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30860537856 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340129792 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30863769600 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865305600 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=1536000 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:42.696 * Looking for test storage... 
00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=53017202688 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=10927996928 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:42.696 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:07:42.696 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:42.955 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.956 --rc genhtml_branch_coverage=1 00:07:42.956 --rc genhtml_function_coverage=1 00:07:42.956 --rc genhtml_legend=1 00:07:42.956 --rc geninfo_all_blocks=1 00:07:42.956 --rc geninfo_unexecuted_blocks=1 00:07:42.956 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.956 ' 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.956 --rc genhtml_branch_coverage=1 00:07:42.956 --rc genhtml_function_coverage=1 00:07:42.956 --rc genhtml_legend=1 00:07:42.956 --rc geninfo_all_blocks=1 00:07:42.956 --rc geninfo_unexecuted_blocks=1 00:07:42.956 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.956 ' 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.956 --rc genhtml_branch_coverage=1 00:07:42.956 --rc genhtml_function_coverage=1 00:07:42.956 --rc genhtml_legend=1 00:07:42.956 --rc geninfo_all_blocks=1 00:07:42.956 --rc geninfo_unexecuted_blocks=1 00:07:42.956 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.956 ' 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.956 --rc genhtml_branch_coverage=1 00:07:42.956 --rc genhtml_function_coverage=1 00:07:42.956 --rc genhtml_legend=1 00:07:42.956 --rc geninfo_all_blocks=1 00:07:42.956 --rc geninfo_unexecuted_blocks=1 00:07:42.956 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:42.956 ' 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:42.956 20:09:55 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:42.956 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:42.956 20:09:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:42.956 [2024-11-26 20:09:55.748936] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:42.956 [2024-11-26 20:09:55.749022] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618308 ] 00:07:42.956 [2024-11-26 20:09:55.832258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.956 [2024-11-26 20:09:55.875424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.215 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.215 INFO: Seed: 3070635289 00:07:43.215 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:43.215 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:43.215 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:43.215 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.215 #2 INITED exec/s: 0 rss: 66Mb 00:07:43.215 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:43.215 This may also happen if the target rejected all inputs we tried so far 00:07:43.215 [2024-11-26 20:09:56.115655] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:43.733 NEW_FUNC[1/671]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:43.733 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:43.733 #11 NEW cov: 11019 ft: 11155 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 4 InsertByte-CrossOver-InsertByte-CopyPart- 00:07:43.992 NEW_FUNC[1/5]: 0x18cc118 in nvme_pcie_qpair_submit_tracker /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:660 00:07:43.992 NEW_FUNC[2/5]: 0x18cf358 in nvme_pcie_copy_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:643 00:07:43.992 #12 NEW cov: 11210 ft: 14307 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:44.252 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:44.252 #16 NEW cov: 11230 ft: 14524 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 4 InsertByte-CrossOver-EraseBytes-InsertRepeatedBytes- 00:07:44.510 #17 NEW cov: 11230 ft: 15916 corp: 5/25b lim: 6 exec/s: 17 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:07:44.511 #18 NEW cov: 11230 ft: 16598 corp: 6/31b lim: 6 exec/s: 18 rss: 75Mb L: 6/6 MS: 1 ChangeByte- 00:07:44.769 #19 NEW cov: 11230 ft: 16836 corp: 7/37b lim: 6 exec/s: 19 rss: 75Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:45.028 #25 NEW cov: 11230 ft: 17610 corp: 8/43b lim: 6 exec/s: 25 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:45.028 #26 NEW cov: 11237 ft: 18117 corp: 9/49b lim: 6 exec/s: 26 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:45.287 #27 NEW cov: 11237 ft: 18140 corp: 10/55b lim: 6 exec/s: 13 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:07:45.287 
#27 DONE cov: 11237 ft: 18140 corp: 10/55b lim: 6 exec/s: 13 rss: 76Mb 00:07:45.287 Done 27 runs in 2 second(s) 00:07:45.287 [2024-11-26 20:09:58.161807] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:45.588 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:45.588 20:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:45.588 [2024-11-26 20:09:58.421083] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:45.588 [2024-11-26 20:09:58.421152] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618728 ] 00:07:45.588 [2024-11-26 20:09:58.499998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.935 [2024-11-26 20:09:58.542182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.935 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.935 INFO: Seed: 1452690533 00:07:45.935 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:45.936 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:45.936 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:45.936 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.936 #2 INITED exec/s: 0 rss: 67Mb 00:07:45.936 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.936 This may also happen if the target rejected all inputs we tried so far 00:07:45.936 [2024-11-26 20:09:58.793756] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:46.221 [2024-11-26 20:09:58.853636] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:46.221 [2024-11-26 20:09:58.853663] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:46.221 [2024-11-26 20:09:58.853682] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:46.480 NEW_FUNC[1/677]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:46.480 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:46.480 #56 NEW cov: 11187 ft: 11082 corp: 2/5b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 4 CrossOver-CopyPart-CrossOver-CopyPart- 00:07:46.480 [2024-11-26 20:09:59.326907] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:46.480 [2024-11-26 20:09:59.326942] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:46.480 [2024-11-26 20:09:59.326961] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:46.739 NEW_FUNC[1/1]: 0x21286a8 in spdk_u32log2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/util/math.c:20 00:07:46.739 #57 NEW cov: 11209 ft: 13745 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:46.739 [2024-11-26 20:09:59.517873] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:46.739 [2024-11-26 20:09:59.517897] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:46.739 [2024-11-26 20:09:59.517915] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:46.739 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:46.739 #58 NEW cov: 11226 ft: 14963 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:46.998 [2024-11-26 20:09:59.711766] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:46.998 [2024-11-26 
20:09:59.711790] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:46.998 [2024-11-26 20:09:59.711807] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:46.998 #61 NEW cov: 11226 ft: 15341 corp: 5/17b lim: 4 exec/s: 61 rss: 75Mb L: 4/4 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:07:46.998 [2024-11-26 20:09:59.907065] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:46.998 [2024-11-26 20:09:59.907088] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:46.998 [2024-11-26 20:09:59.907105] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:47.257 #62 NEW cov: 11226 ft: 15915 corp: 6/21b lim: 4 exec/s: 62 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:07:47.257 [2024-11-26 20:10:00.099202] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:47.257 [2024-11-26 20:10:00.099234] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:47.257 [2024-11-26 20:10:00.099253] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:47.516 #68 NEW cov: 11226 ft: 16155 corp: 7/25b lim: 4 exec/s: 68 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:47.516 [2024-11-26 20:10:00.284639] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:47.516 [2024-11-26 20:10:00.284665] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:47.516 [2024-11-26 20:10:00.284683] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:47.516 #69 NEW cov: 11226 ft: 17014 corp: 8/29b lim: 4 exec/s: 69 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:07:47.775 [2024-11-26 20:10:00.470834] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:47.775 [2024-11-26 20:10:00.470857] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:47.775 [2024-11-26 20:10:00.470874] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:47.775 #70 NEW cov: 11233 ft: 17542 corp: 9/33b lim: 4 exec/s: 70 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:47.775 [2024-11-26 20:10:00.660991] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:47.775 [2024-11-26 20:10:00.661014] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:47.775 [2024-11-26 20:10:00.661031] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:48.033 #71 NEW cov: 11233 ft: 17602 corp: 10/37b lim: 4 exec/s: 35 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:48.033 #71 DONE cov: 11233 ft: 17602 corp: 10/37b lim: 4 exec/s: 35 rss: 75Mb 00:07:48.033 Done 71 runs in 2 second(s) 00:07:48.033 [2024-11-26 20:10:00.800801] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:48.292 
20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:48.292 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:48.292 20:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:48.292 [2024-11-26 20:10:01.061510] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:48.292 [2024-11-26 20:10:01.061582] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619262 ] 00:07:48.292 [2024-11-26 20:10:01.140126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.292 [2024-11-26 20:10:01.180030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.551 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.551 INFO: Seed: 4082666524 00:07:48.551 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:48.551 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:48.551 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:48.551 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.551 #2 INITED exec/s: 0 rss: 68Mb 00:07:48.551 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:48.551 This may also happen if the target rejected all inputs we tried so far 00:07:48.551 [2024-11-26 20:10:01.421448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:48.809 [2024-11-26 20:10:01.495571] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:49.068 NEW_FUNC[1/677]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:49.068 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:49.068 #47 NEW cov: 11175 ft: 11147 corp: 2/9b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 5 ChangeByte-InsertByte-CrossOver-InsertByte-CMP- DE: "\000\000\000\003"- 00:07:49.068 [2024-11-26 20:10:01.977855] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:49.068 [2024-11-26 20:10:01.977896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:49.327 NEW_FUNC[1/1]: 0x158fc78 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3098 00:07:49.327 #51 NEW cov: 11199 ft: 14217 corp: 3/17b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 4 ChangeByte-PersAutoDict-EraseBytes-CopyPart- DE: "\000\000\000\003"- 00:07:49.327 [2024-11-26 20:10:02.186222] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:49.327 [2024-11-26 20:10:02.186254] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:49.586 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:49.586 #52 NEW cov: 11219 ft: 16014 corp: 4/25b lim: 8 exec/s: 0 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:07:49.586 [2024-11-26 20:10:02.389286] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:49.586 #53 NEW cov: 11219 ft: 17098 corp: 5/33b lim: 8 exec/s: 53 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:49.847 [2024-11-26 20:10:02.573766] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:49.847 #54 NEW cov: 11219 ft: 17526 corp: 6/41b lim: 8 exec/s: 54 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:07:49.847 [2024-11-26 20:10:02.759647] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:49.847 [2024-11-26 20:10:02.759677] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:50.106 #55 NEW cov: 11219 ft: 17796 corp: 7/49b lim: 8 exec/s: 55 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:07:50.106 [2024-11-26 20:10:02.942177] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:50.364 #71 NEW cov: 11219 ft: 17976 corp: 8/57b lim: 8 exec/s: 71 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:07:50.364 [2024-11-26 20:10:03.127566] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:50.364 #72 NEW cov: 11226 ft: 18056 corp: 9/65b lim: 8 exec/s: 72 rss: 77Mb L: 8/8 MS: 1 PersAutoDict- DE: "\000\000\000\003"- 00:07:50.622 [2024-11-26 20:10:03.315148] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:50.622 [2024-11-26 20:10:03.315179] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:50.622 #73 NEW cov: 11226 ft: 18324 corp: 10/73b lim: 8 exec/s: 36 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 
00:07:50.622 #73 DONE cov: 11226 ft: 18324 corp: 10/73b lim: 8 exec/s: 36 rss: 77Mb 00:07:50.622 ###### Recommended dictionary. ###### 00:07:50.622 "\000\000\000\003" # Uses: 3 00:07:50.622 ###### End of recommended dictionary. ###### 00:07:50.622 Done 73 runs in 2 second(s) 00:07:50.622 [2024-11-26 20:10:03.440799] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:50.881 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:50.881 20:10:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:50.881 [2024-11-26 20:10:03.698800] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:50.881 [2024-11-26 20:10:03.698869] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619792 ] 00:07:50.881 [2024-11-26 20:10:03.778078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.140 [2024-11-26 20:10:03.819529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.140 INFO: Running with entropic power schedule (0xFF, 100). 00:07:51.140 INFO: Seed: 2424701506 00:07:51.140 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:51.140 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:51.140 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:51.140 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.140 #2 INITED exec/s: 0 rss: 68Mb 00:07:51.140 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:51.140 This may also happen if the target rejected all inputs we tried so far 00:07:51.140 [2024-11-26 20:10:04.057739] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:51.657 NEW_FUNC[1/676]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:51.657 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:51.657 #55 NEW cov: 11148 ft: 11128 corp: 2/33b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 3 ChangeBit-InsertRepeatedBytes-CopyPart- 00:07:51.915 NEW_FUNC[1/1]: 0x13852d8 in nvmf_bdev_ctrlr_read_cmd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr_bdev.c:336 00:07:51.915 #61 NEW cov: 11197 ft: 14463 corp: 3/65b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:52.175 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:52.175 #62 NEW cov: 11217 ft: 15501 corp: 4/97b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:07:52.433 #63 NEW cov: 11217 ft: 16717 corp: 5/129b lim: 32 exec/s: 63 rss: 77Mb L: 32/32 MS: 1 CrossOver- 00:07:52.433 #64 NEW cov: 11217 ft: 17040 corp: 6/161b lim: 32 exec/s: 64 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:52.691 #65 NEW cov: 11217 ft: 17873 corp: 7/193b lim: 32 exec/s: 65 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:52.950 #66 NEW cov: 11217 ft: 18364 corp: 8/225b lim: 32 exec/s: 66 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:52.950 #67 NEW cov: 11224 ft: 18437 corp: 9/257b lim: 32 exec/s: 67 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:53.209 #73 NEW cov: 11224 ft: 18494 corp: 10/289b lim: 32 exec/s: 73 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:07:53.468 #74 NEW cov: 11224 ft: 18809 corp: 11/321b lim: 32 exec/s: 37 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:07:53.468 #74 DONE cov: 11224 ft: 18809 corp: 11/321b lim: 32 exec/s: 37 rss: 77Mb 00:07:53.468 Done 74 runs in 2 second(s) 00:07:53.468 [2024-11-26 20:10:06.210815] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 
00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:53.728 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:53.728 20:10:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:53.728 [2024-11-26 20:10:06.468216] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:53.728 [2024-11-26 20:10:06.468280] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620173 ] 00:07:53.728 [2024-11-26 20:10:06.549619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.728 [2024-11-26 20:10:06.590208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.988 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:53.988 INFO: Seed: 900741520 00:07:53.988 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:53.988 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:53.988 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:53.988 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.988 #2 INITED exec/s: 0 rss: 67Mb 00:07:53.988 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:53.988 This may also happen if the target rejected all inputs we tried so far 00:07:53.988 [2024-11-26 20:10:06.829481] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:54.505 NEW_FUNC[1/677]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:54.505 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:54.505 #17 NEW cov: 11184 ft: 11123 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 InsertByte-InsertRepeatedBytes-CopyPart-ChangeByte-CMP- DE: "\000\000"- 00:07:54.764 #18 NEW cov: 11198 ft: 14547 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:54.764 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:54.764 #19 NEW cov: 11215 ft: 15925 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:55.022 #20 NEW cov: 11215 ft: 16306 corp: 5/129b lim: 32 exec/s: 20 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:55.281 #21 NEW cov: 11215 ft: 16890 corp: 6/161b lim: 32 exec/s: 21 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:07:55.281 #22 NEW cov: 11215 ft: 17119 corp: 7/193b lim: 32 exec/s: 22 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:07:55.539 #23 NEW cov: 11215 ft: 17435 corp: 8/225b lim: 32 exec/s: 23 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:07:55.798 #29 NEW cov: 11215 ft: 17549 corp: 9/257b lim: 32 exec/s: 29 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:56.058 #35 NEW cov: 11222 ft: 17898 corp: 10/289b lim: 32 exec/s: 35 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:07:56.058 #36 NEW cov: 11222 ft: 17949 corp: 11/321b lim: 32 exec/s: 18 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:07:56.058 #36 DONE cov: 11222 ft: 17949 corp: 11/321b lim: 32 exec/s: 18 rss: 77Mb 00:07:56.058 ###### Recommended dictionary. ###### 00:07:56.058 "\000\000" # Uses: 2 00:07:56.058 ###### End of recommended dictionary. 
###### 00:07:56.058 Done 36 runs in 2 second(s) 00:07:56.058 [2024-11-26 20:10:08.935860] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:56.317 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:56.318 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:56.318 20:10:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:56.318 [2024-11-26 20:10:09.194056] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 
00:07:56.318 [2024-11-26 20:10:09.194135] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620643 ] 00:07:56.576 [2024-11-26 20:10:09.274228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.576 [2024-11-26 20:10:09.314966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.576 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.576 INFO: Seed: 3625761441 00:07:56.837 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:56.837 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:56.837 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:56.837 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.837 #2 INITED exec/s: 0 rss: 67Mb 00:07:56.837 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:56.837 This may also happen if the target rejected all inputs we tried so far 00:07:56.837 [2024-11-26 20:10:09.558940] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:56.837 [2024-11-26 20:10:09.610659] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:56.837 [2024-11-26 20:10:09.610694] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.097 NEW_FUNC[1/675]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:57.097 NEW_FUNC[2/675]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:57.097 #3 NEW cov: 11170 ft: 11158 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:07:57.355 [2024-11-26 20:10:10.070112] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.355 [2024-11-26 20:10:10.070162] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.355 NEW_FUNC[1/3]: 0x1925b78 in _nvme_qpair_complete_abort_queued_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:593 00:07:57.355 NEW_FUNC[2/3]: 0x1927248 in spdk_nvme_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:764 00:07:57.355 #9 NEW cov: 11211 ft: 14602 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:07:57.355 [2024-11-26 20:10:10.259102] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.355 [2024-11-26 20:10:10.259135] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.613 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:57.613 #10 NEW cov: 11228 ft: 15893 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:07:57.613 [2024-11-26 20:10:10.454776] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.613 [2024-11-26 20:10:10.454809] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.871 #16 NEW cov: 11228 ft: 16409 corp: 5/53b lim: 13 exec/s: 16 rss: 76Mb L: 
13/13 MS: 1 ChangeByte- 00:07:57.871 [2024-11-26 20:10:10.638896] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:57.871 [2024-11-26 20:10:10.638925] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:57.871 #17 NEW cov: 11228 ft: 16548 corp: 6/66b lim: 13 exec/s: 17 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:07:58.129 [2024-11-26 20:10:10.814764] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.129 [2024-11-26 20:10:10.814794] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.129 #18 NEW cov: 11228 ft: 17330 corp: 7/79b lim: 13 exec/s: 18 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:58.129 [2024-11-26 20:10:10.990356] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.129 [2024-11-26 20:10:10.990387] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.387 #24 NEW cov: 11228 ft: 17342 corp: 8/92b lim: 13 exec/s: 24 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:07:58.387 [2024-11-26 20:10:11.166012] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.387 [2024-11-26 20:10:11.166043] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.387 #25 NEW cov: 11228 ft: 17821 corp: 9/105b lim: 13 exec/s: 25 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:07:58.645 [2024-11-26 20:10:11.347154] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.645 [2024-11-26 20:10:11.347186] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.645 #26 NEW cov: 11235 ft: 18168 corp: 10/118b lim: 13 exec/s: 26 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:07:58.645 [2024-11-26 20:10:11.525849] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.645 [2024-11-26 20:10:11.525880] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.903 #32 pulse cov: 11235 ft: 18239 corp: 10/118b lim: 13 exec/s: 16 rss: 77Mb 00:07:58.903 #32 NEW cov: 11235 ft: 18239 corp: 11/131b lim: 13 exec/s: 16 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:07:58.903 #32 DONE cov: 11235 ft: 18239 corp: 11/131b lim: 13 exec/s: 16 rss: 77Mb 00:07:58.903 Done 32 runs in 2 second(s) 00:07:58.903 [2024-11-26 20:10:11.648793] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local 
vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:59.161 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:59.161 20:10:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:59.161 [2024-11-26 20:10:11.910147] Starting SPDK v25.01-pre git sha1 7cc16c961 / DPDK 24.03.0 initialization... 00:07:59.161 [2024-11-26 20:10:11.910217] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621184 ] 00:07:59.161 [2024-11-26 20:10:11.988790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.161 [2024-11-26 20:10:12.028999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.420 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.420 INFO: Seed: 2042777545 00:07:59.420 INFO: Loaded 1 modules (386754 inline 8-bit counters): 386754 [0x2c2b80c, 0x2c89ece), 00:07:59.420 INFO: Loaded 1 PC tables (386754 PCs): 386754 [0x2c89ed0,0x3270af0), 00:07:59.420 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:59.421 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.421 #2 INITED exec/s: 0 rss: 67Mb 00:07:59.421 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:59.421 This may also happen if the target rejected all inputs we tried so far 00:07:59.421 [2024-11-26 20:10:12.267470] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:59.421 [2024-11-26 20:10:12.310682] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.421 [2024-11-26 20:10:12.310714] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:59.938 NEW_FUNC[1/677]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:59.938 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:59.938 #29 NEW cov: 11134 ft: 11122 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:59.938 [2024-11-26 20:10:12.791066] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.938 [2024-11-26 20:10:12.791108] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.197 NEW_FUNC[1/1]: 0x15c8c68 in get_nvmf_vfio_user_req /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:5398 00:08:00.197 #35 NEW cov: 11203 ft: 14765 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:08:00.197 [2024-11-26 20:10:12.985045] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.197 [2024-11-26 20:10:12.985078] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.197 NEW_FUNC[1/1]: 0x1c12bc8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:00.197 #36 NEW cov: 11220 ft: 15556 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:00.456 [2024-11-26 20:10:13.181018] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.456 [2024-11-26 20:10:13.181049] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.456 #37 NEW cov: 11220 ft: 17015 corp: 5/37b lim: 9 exec/s: 37 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:08:00.456 [2024-11-26 20:10:13.376733] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.456 [2024-11-26 20:10:13.376766] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.714 #38 NEW cov: 11220 ft: 17133 corp: 6/46b lim: 9 exec/s: 38 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:00.714 [2024-11-26 20:10:13.563584] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.714 [2024-11-26 20:10:13.563623] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.972 #41 NEW cov: 11220 ft: 17423 corp: 7/55b lim: 9 exec/s: 41 rss: 77Mb L: 9/9 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:00.972 [2024-11-26 20:10:13.763237] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.972 [2024-11-26 20:10:13.763268] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.972 #42 NEW cov: 11220 ft: 17779 corp: 8/64b lim: 9 exec/s: 42 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:01.230 [2024-11-26 20:10:13.951100] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 
00:08:01.230 [2024-11-26 20:10:13.951132] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.230 #43 NEW cov: 11227 ft: 18075 corp: 9/73b lim: 9 exec/s: 43 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:01.230 [2024-11-26 20:10:14.140906] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.230 [2024-11-26 20:10:14.140939] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.489 #54 NEW cov: 11227 ft: 18392 corp: 10/82b lim: 9 exec/s: 27 rss: 77Mb L: 9/9 MS: 1 ChangeBit- 00:08:01.489 #54 DONE cov: 11227 ft: 18392 corp: 10/82b lim: 9 exec/s: 27 rss: 77Mb 00:08:01.489 Done 54 runs in 2 second(s) 00:08:01.489 [2024-11-26 20:10:14.274802] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:01.747 20:10:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:01.747 20:10:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.747 20:10:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.747 20:10:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:01.747 00:08:01.747 real 0m19.178s 00:08:01.747 user 0m27.205s 00:08:01.747 sys 0m1.830s 00:08:01.747 20:10:14 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.747 20:10:14 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:01.747 ************************************ 00:08:01.747 END TEST vfio_llvm_fuzz 00:08:01.747 ************************************ 00:08:01.747 00:08:01.747 real 1m22.777s 00:08:01.747 user 2m6.980s 00:08:01.747 sys 0m9.378s 00:08:01.747 20:10:14 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.747 20:10:14 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:01.748 ************************************ 00:08:01.748 END TEST llvm_fuzz 00:08:01.748 ************************************ 00:08:01.748 20:10:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:08:01.748 20:10:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:08:01.748 20:10:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:08:01.748 20:10:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.748 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:08:01.748 20:10:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:08:01.748 20:10:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:08:01.748 20:10:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:08:01.748 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:08:08.321 INFO: APP EXITING 00:08:08.321 INFO: killing all VMs 00:08:08.321 INFO: killing vhost app 00:08:08.321 INFO: EXIT DONE 00:08:11.606 Waiting for block devices as requested 00:08:11.606 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:11.606 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:11.606 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:11.864 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:11.864 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:11.864 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:12.122 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:12.122 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:12.122 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:12.122 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:12.380 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:12.380 0000:80:04.4 
(8086 2021): vfio-pci -> ioatdma 00:08:12.380 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:12.638 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:12.638 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:12.638 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:12.897 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:17.083 Cleaning 00:08:17.083 Removing: /dev/shm/spdk_tgt_trace.pid1593191 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1590732 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1591907 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1593191 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1593651 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1594734 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1594756 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1595871 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1595877 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1596307 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1596639 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1596960 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1597298 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1597486 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1597676 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1597956 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1598273 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1599115 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1602131 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1602343 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1602622 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1602767 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1603293 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1603444 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1604012 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1604021 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1604315 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1604332 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1604624 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1604629 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1605257 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1605498 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1605639 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1605914 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1606515 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1606947 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1607484 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1607769 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1608429 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1609194 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1609681 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1610211 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1610609 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1611037 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1611566 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1611921 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1612392 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1612930 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1613221 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1613752 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1614196 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1614569 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1615107 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1615454 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1615926 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1616454 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1616750 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1617276 00:08:17.083 Removing: /var/run/dpdk/spdk_pid1617717 00:08:17.084 Removing: /var/run/dpdk/spdk_pid1618308 00:08:17.084 Removing: 
/var/run/dpdk/spdk_pid1618728 00:08:17.084 Removing: /var/run/dpdk/spdk_pid1619262 00:08:17.084 Removing: /var/run/dpdk/spdk_pid1619792 00:08:17.084 Removing: /var/run/dpdk/spdk_pid1620173 00:08:17.084 Removing: /var/run/dpdk/spdk_pid1620643 00:08:17.084 Removing: /var/run/dpdk/spdk_pid1621184 00:08:17.084 Clean 00:08:17.084 20:10:29 -- common/autotest_common.sh@1453 -- # return 0 00:08:17.084 20:10:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:08:17.084 20:10:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.084 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 20:10:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:08:17.084 20:10:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.084 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 20:10:29 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:17.084 20:10:29 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:17.084 20:10:29 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:17.084 20:10:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:08:17.084 20:10:29 -- spdk/autotest.sh@398 -- # hostname 00:08:17.084 20:10:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:08:17.084 geninfo: WARNING: invalid characters removed from testname! 
00:08:22.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:08:22.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:08:25.641 20:10:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:33.759 20:10:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:39.027 20:10:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:44.296 20:10:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:49.708 20:11:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:53.898 20:11:06 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:59.165 20:11:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:08:59.165 20:11:12 -- spdk/autorun.sh@1 -- $ timing_finish 00:08:59.165 20:11:12 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:08:59.165 20:11:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:59.165 20:11:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:08:59.165 20:11:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:59.165 + [[ -n 1481531 ]] 00:08:59.165 + sudo kill 1481531 00:08:59.174 [Pipeline] } 00:08:59.189 [Pipeline] // stage 00:08:59.196 [Pipeline] } 00:08:59.209 [Pipeline] // timeout 00:08:59.214 [Pipeline] } 00:08:59.225 [Pipeline] // catchError 00:08:59.230 [Pipeline] } 00:08:59.243 [Pipeline] // wrap 00:08:59.251 [Pipeline] } 00:08:59.266 [Pipeline] // catchError 00:08:59.277 [Pipeline] stage 00:08:59.279 [Pipeline] { (Epilogue) 00:08:59.294 [Pipeline] catchError 00:08:59.297 [Pipeline] { 00:08:59.311 [Pipeline] echo 00:08:59.314 Cleanup processes 00:08:59.322 [Pipeline] sh 00:08:59.607 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:59.607 1629835 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:59.621 [Pipeline] sh 00:08:59.905 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:59.905 ++ grep -v 'sudo pgrep' 00:08:59.905 ++ awk '{print $1}' 00:08:59.905 + sudo kill -9 00:08:59.905 + true 00:08:59.918 [Pipeline] sh 00:09:00.207 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:09:00.207 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:00.207 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:01.582 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:13.792 [Pipeline] sh 00:09:14.076 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:14.076 Artifacts sizes are good 00:09:14.090 [Pipeline] archiveArtifacts 00:09:14.100 Archiving artifacts 00:09:14.215 [Pipeline] sh 00:09:14.494 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:14.510 [Pipeline] cleanWs 00:09:14.521 [WS-CLEANUP] Deleting project workspace... 00:09:14.521 [WS-CLEANUP] Deferred wipeout is used... 00:09:14.526 [WS-CLEANUP] done 00:09:14.529 [Pipeline] } 00:09:14.547 [Pipeline] // catchError 00:09:14.561 [Pipeline] sh 00:09:14.880 + logger -p user.info -t JENKINS-CI 00:09:14.888 [Pipeline] } 00:09:14.901 [Pipeline] // stage 00:09:14.907 [Pipeline] } 00:09:14.921 [Pipeline] // node 00:09:14.927 [Pipeline] End of Pipeline 00:09:14.960 Finished: SUCCESS